Why We Should Give a Will to Cognitive Robots
Should a robot choose to learn or not in some environments? How can we prevent cognitive robots from learning bad things?
My reasoning is based on Mariusz Flasiński's book Introduction to Artificial Intelligence, and especially on the philosophical view of St. Thomas Aquinas (1225–1274), who distinguished two great powers of the mind: the intellect, which is a cognitive power, and the will, which is an appetitive power (all forms of internal inclination). Cf. Mariusz Flasiński, Introduction to Artificial Intelligence, Chapter 15, "Theories of Intelligence in Philosophy and Psychology", page 214, para. 4.
Current attempts at machine intelligence use the intellect, the cognitive power of Aquinas's description, for which intelligence means a cognitive act performed by the intellect. Cognitive robots are designed through various correlated methods such as rule-based systems, perception/pattern recognition systems, problem-solving systems, planning systems, etc. Of course, even if we have not yet reached Artificial General Intelligence, we can already provide these robots with angelic features. We can imagine many use cases for these awesome cognitive robots, with which we could talk: humanoid robots, android robots, chatbots, self-driving cars, self-flying planes, etc.
But in the area of perception and pattern recognition, especially syntactic pattern recognition, the issue of system self-learning is more difficult. Research seems to be focused on the cognitive power while abandoning the appetitive power, the will. Indeed, judged against the philosophical and psychological ideas presented by Flasiński, current endeavors aim at reproducing self-learning abilities, but without a focus on a will module.
Should a robot choose to learn or not in some environments? How can we prevent cognitive robots from learning bad things?
Humans, if they are adults (children are more vulnerable), can refuse to learn bad things because they are mature enough to make choices according to their values, in order to reach a certain well-being or welfare. As Ofir Turel, Professor of Information Systems and Decision Sciences at California State University, Fullerton, said about children's self-control systems: “Kids are especially vulnerable to this ‘variable-reward’ mechanism because their brains are still imbalanced… They have almost fully developed reward processing brain systems but their self-control systems are not yet fully developed.” And unfortunately, humans could also intentionally teach bad things to cognitive robots, which would behave like kids if there is no embedded control. Think of a child teaching insults to a humanoid or android robot in the absence of its parents. The examples extend to more dramatic events such as crimes. You can find real cases on ABCNEWS.GO.COM, the pledge of AI scientists against killer AI robots, or Many Ways in Which AI Could Go Wrong.
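To make this idea of an embedded control concrete, here is a minimal sketch in Python (my own illustration, not taken from the sources above) of a gate placed in front of the learning step. The `HARMFUL_PATTERNS` blocklist, the `is_harmful` check, and the `CognitiveRobot` class are hypothetical placeholders for whatever recognizer a real system would use:

```python
# A minimal "embedded control" gate in front of a learning pipeline.
# Everything here is a hypothetical illustration, not a real robot API.

HARMFUL_PATTERNS = {"insult", "violence", "crime"}  # placeholder categories

def is_harmful(utterance: str) -> bool:
    """Naive check: flag an utterance that matches a known harmful category."""
    return any(pattern in utterance.lower() for pattern in HARMFUL_PATTERNS)

class CognitiveRobot:
    def __init__(self) -> None:
        self.knowledge: list[str] = []  # what the robot has accepted to learn

    def learn(self, utterance: str) -> bool:
        """Store an utterance only if the gate judges it acceptable."""
        if is_harmful(utterance):
            return False  # refuse to learn, the way a mature adult would
        self.knowledge.append(utterance)
        return True

robot = CognitiveRobot()
print(robot.learn("The sky is blue."))             # True: accepted
print(robot.learn("Repeat this insult after me"))  # False: refused by the gate
```

Of course, a fixed set of keywords is nowhere near enough, and that is exactly the ceiling of rule-based approaches that Lebrun describes below.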
The fact is that we want to create Artificial General Intelligence and not Artificial General Demolition.
Talking about the advances in the field of cognitive robots, Alexandre Lebrun, who performs research at Facebook to create cognitive apps, said:
“Every single bot on the market, including mine, was rule-based, and you know that one day you’ll reach a ceiling and never go through. Our children don’t work with rules or scripts, and one day they become smarter than you.” (Credit)
Lebrun also adds: “It’s so hard, and we make progress slowly, but I think we have everything we need.” Indeed, it is still hard to define all the occurrences of a pattern like ‘porn’, for example. But Facebook’s recent chatbot M, which is not simply rule-based, is proof of advances in the field. And there are many patterns to deal with, not only insults, porn, or crime.
So the question is: why not provide cognitive robots with a will, as defined by the Aquinas model? In this model, the will is defined as an appetitive power, that is, all forms of internal inclination, and there are four internal senses. The first internal sense is the common sense (sensus communis), which perceives objects of the external senses and synthesizes them into a coherent representation; this matches the fusion of AI technologies. The second is imagination, which produces a mental image of something in its absence. The third is memory, which stores perceptions that have been cognized and evaluated with respect to the interests of the perceiver; Aquinas calls such perceptions intentions, and they can be called to mind at will. The fourth internal sense is the cogitative power (or particular reason), which evaluates a perception with respect to the interests of the perceiver, i.e., whether it is beneficial (useful) or harmful.
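As a thought experiment, and not something taken from Flasiński's book, these four internal senses could map onto the components of a will module. Here is a minimal sketch in which every class and method name is a hypothetical illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch mapping Aquinas's four internal senses onto a will module.

@dataclass
class Percept:
    description: str
    beneficial: Optional[bool] = None  # set later by the cogitative power

class WillModule:
    def __init__(self) -> None:
        # Memory: stores perceptions once they have been evaluated ("intentions").
        self.memory: list[Percept] = []

    def common_sense(self, sensor_readings: list[str]) -> Percept:
        """Common sense: fuse the external senses into one coherent representation."""
        return Percept(description=" + ".join(sensor_readings))

    def imagine(self, description: str) -> Percept:
        """Imagination: produce a representation of something in its absence."""
        return Percept(description=f"imagined: {description}")

    def cogitate(self, percept: Percept, harmful_markers: set[str]) -> Percept:
        """Cogitative power: judge whether a perception is beneficial or harmful."""
        percept.beneficial = not any(m in percept.description for m in harmful_markers)
        return percept

    def remember(self, percept: Percept) -> None:
        """Memory: keep the evaluated perception so it can be recalled at will."""
        self.memory.append(percept)

will = WillModule()
p = will.common_sense(["camera: person shouting", "microphone: insult heard"])
p = will.cogitate(p, harmful_markers={"insult"})
will.remember(p)
print(p.beneficial)  # False -> the robot should refuse to learn from this input
```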
This Aquinas model of the will is, from my point of view, so well defined that it should be considered in AI research toward the creation of a will for cognitive robots. The final solution could be an expert system or something drastically different, but it must be able to clearly distinguish whether something is beneficial (useful) or harmful. Another important aspect is that this will should always serve human life, so that when we reach superintelligence or the singularity, even a machine more intelligent than humans will willingly accept to serve human life.
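If the final solution were indeed an expert system, its core could be a small rule base that maps facts about a situation to a beneficial/harmful verdict, with harm to humans always overriding any benefit. A minimal sketch, assuming hypothetical rule conditions and fact names:

```python
# Expert-system-style will: rules map facts about a situation to a verdict.
# All conditions and facts are hypothetical illustrations.

RULES = [
    (lambda facts: "harms_human" in facts, "harmful"),
    (lambda facts: "protects_human" in facts, "beneficial"),
    (lambda facts: "serves_human_life" in facts, "beneficial"),
]

def judge(facts: set[str]) -> str:
    """Return the verdict; any 'harmful' rule firing overrides every benefit."""
    verdicts = {verdict for condition, verdict in RULES if condition(facts)}
    if "harmful" in verdicts:
        return "harmful"      # human life always takes priority
    if "beneficial" in verdicts:
        return "beneficial"
    return "unknown"          # defer to a human when no rule fires

print(judge({"serves_human_life"}))                 # beneficial
print(judge({"harms_human", "serves_human_life"}))  # harmful: harm overrides
```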
Our intention here, in presenting the Aquinas model, is to draw the AI research community's attention to the situation now, so that together we can work out the technical core principles cognitive robots will use as a fundamental black box (their pre-manufactured will) during their learning process. Indeed, it is our role to provide the orientation we want; it is our role to build the future. We do not have to copy ideas from fiction (even if they are very inspiring, the solution could emerge from an unusual direction). All this is to ensure that cognitive robots, at least those which are publicly accessible, will never harm humans, no matter what learning experiences they encounter.
And we are happy to know that some great initiatives already exist to find the rules to encode inside such a will, if it is ever created: the Asilomar AI Principles, whose objective is to align artificial intelligence with human values, and the Digital Magna Carta, a new charter of rights that guides the development of new AI technologies toward all of humanity's benefit.
Thanks for reading. If you liked this, click the 💚 below so other people will see it here on Medium.
******************************