Artificial Super Intelligence and the Human as a Rational Agent

Much has been written about AI, Super AI and what will become of it. But we ain’t seen nothing yet. Literally. It is far from idle to think fearfully or wishfully about the future and about the consequences of advanced forms of AI. And this is not the future, because we are right in the middle of it: the Era of AI. Weak AI. Can we even call it ‘intelligence’? Wait till Strong AI is here: Artificial General Intelligence and Artificial Super Intelligence, affectionately abbreviated as AGI and ASI. Impersonated as the average smart guy[1] and its super smart sibling. The average smart guy, because it only resembles the intelligence of the human. But does it?

I will not speculate here on what ASI as a Singleton[2] or Master Algorithm[3] will ultimately look like. Instead, I will point out how we are already part of an ASI in the making, right in the middle of the AI paradigm shift that is taking place. Note that by ‘in the middle’ I do not mean ‘human-centred’, in the connotation in which that term is currently applied to AI development in professional environments and society.

AGI stands for human-level intelligence. This means that AGI should be able to do what a human can do: preparing and serving a cup of coffee, writing scientific research, or convincing humans that it is a biological human, among other things. The human being is quirky. That is why it is not desirable to create human-like artificial intelligence, unless it is the interface of the AI that acts like a quirky human, with the goal of connecting on an emotional level and thereby gaining trust. AGI is the artificial representation of what an ideal rational human would be like. Its primitive predecessor, AI, is a rational agent whose algorithms are designed to achieve the best possible outcome, based on a calculated estimation of trade-offs in a certain environment that provides the digital input for the algorithm. The ideal human derives his image from the humanistic ideal of man,[4] who through reason and knowledge increases his wellbeing and that of his fellow man.

Human intelligence has been described in many ways, but overall it is characterised by the successful interaction of an individual with an external environment, problem or situation.[5] Where an intelligent robot will have difficulty escaping from a locked, burning room, a human will throw a chair through the window and escape. This is also one of the reasons that humans are still in the loop in automated systems. AI can only make decisions based on historical data, comparing it with actual data to predict what will happen and to decide what to do. Thus, if a situation is a novel combination of events, no matter how trivial, the AI will have no solution but to tap the human on the shoulder, communicating empathically: “Buddy, it is your turn now”, as for example in the case of ‘autonomous’ cars or process plants.
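The handoff described above can be caricatured in a few lines of code. This is a minimal sketch of my own, not taken from any real autonomous-driving system; the class and event names are hypothetical. It shows the core limitation: an agent that only knows historical combinations of events must defer to the human when it meets a novel one.

```python
# Hypothetical sketch: an automated system that acts on familiar
# situations and hands control to the human on novel ones.

class HandoffController:
    def __init__(self, historical_events):
        # The "experience" of the AI: situations it has historical data for.
        self.known = set(historical_events)

    def decide(self, current_events):
        """Act if the situation is familiar; otherwise defer to the human."""
        situation = frozenset(current_events)
        if situation in self.known:
            return "automated response"
        # A novel combination of events, however trivial:
        return "Buddy, it is your turn now"

controller = HandoffController([
    frozenset({"rain", "traffic"}),
    frozenset({"clear", "traffic"}),
])
print(controller.decide({"rain", "traffic"}))    # familiar -> automated response
print(controller.decide({"rain", "roadworks"}))  # novel -> human takes over
```

The point of the sketch is that nothing in the agent's decision procedure generalises beyond its historical data; the "chair through the window" solution is structurally unavailable to it.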

In order to make this interaction work, we need trustworthy AI. This means that humans trust the AI to do the right thing, and that the AI trusts humans to do what it cannot do, so that together they can accomplish their common goal. However, human trust differs from the trust of AI. Human trust can be affect-based and/or rational; the trust of AI can only be rational. Rational trust holds that the rational agent is ‘able to choose the best option for itself, given a specific scenario and a goal to achieve’.[6] Trust is here a calculated consideration, in which the individual benefits of trusting for personal gain are weighed against the risk that the trustee will not do what it is trusted to do.[7] This type of trust depends upon someone’s (or something’s) measurable values that indicate their trustworthiness, such as reliability, knowledge and competence. Humans, although ideally they trust rationally,[8] often trust emotionally. It does not matter how much information or convincing material you provide the sceptical person; he or she will still resist. That said, trust as an ‘attitude of the heart’[9] sounds to the computer or cognitive scientist like something that can still be understood rationally,[10] and therefore be solved. And they are right. Just understand what makes that heart tick, and you gain the trust. Reach the cognition through the affect by means of affective computing, and you have the solution to this problem.
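The calculated consideration behind rational trust can be made concrete. The following is an illustrative sketch of my own, not Taddeo's formal model: it aggregates measurable indicators (reliability, knowledge, competence) into a trustworthiness score, then weighs the expected benefit of trusting against the expected loss if the trustee fails. All function names and weightings are assumptions for illustration.

```python
# Hypothetical sketch of rational trust as a cost-benefit calculation.

def trustworthiness(reliability, competence, knowledge):
    """Aggregate measurable indicators (each in [0, 1]) into a
    probability-like score that the trustee will deliver."""
    return (reliability + competence + knowledge) / 3

def rational_trust(benefit, loss, reliability, competence, knowledge):
    """Trust when the expected gain of trusting outweighs the
    expected loss of being let down."""
    p = trustworthiness(reliability, competence, knowledge)
    expected_gain = p * benefit
    expected_loss = (1 - p) * loss
    return expected_gain > expected_loss

# A reliable, competent trustee is trusted even at high stakes:
print(rational_trust(benefit=10, loss=8,
                     reliability=0.9, competence=0.95, knowledge=0.85))
```

Affect-based trust is precisely what resists this reduction: no additional evidence fed into the calculation moves the sceptic, which is why the affective-computing route targets the heart rather than the spreadsheet.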

In my opinion, it is beyond doubt that strong AI will ultimately be developed. However, as I pointed out earlier, human-level is not human-like: a semantic mistake that is unfortunately often made. AGI is a computational, rational representation of the human mind. This implies a scientific viewpoint of what a human is, and a reduction of reality. “Only that which can be measured is real”, in the words of the German physicist Max Planck.[11] Whatever can be put in tiny building blocks, is; and with this comes manageability. It is therefore of the utmost importance for the success of AI, which is developed to accomplish goals as effectively and efficiently as possible, to reduce everything to LEGO®. This seems pragmatically legitimate from a utilitarian point of view (unless you step on it with your bare foot), hypothesising that AI will only have benevolent intentions, in order to make the world a better place with better people.

Since the dawn of human consciousness, whether the result of evolution or of female disobedience in eating forbidden fruit, we have striven for happiness. Our intelligence, and the technology that results from it, have always been the path to accomplishing this goal.[12] Both the creation of novel technology and our belief systems, which are also a result of our intelligence, provide humanity with a handle to get a grip on suffering.

Science ticks both boxes. Scientific principles are the foundation of technology; and science is a belief system. It is the belief that reality can be comprehended through scientific principles, whereby truth is understood pragmatically: truth is the working solution, arrived at through scientific inquiry, to a certain problem. And the question related to the suffering-problem is: how to improve human wellbeing? (which, of course, also includes the natural environment). Acknowledging and accepting this legitimates the chosen path of reducing ourselves to LEGO®, including our mind.

In the last 350 years, major technological changes have taken place at an accelerating speed, up until today. Mechanisation, and the addition of energy resources to it, such as the use of electricity and the development of the internal combustion engine, led to the growth and improvement of industrial processes. Halfway through the previous century, nuclear energy emerged and the use of electronics rose, both in industry and among private consumers. Ongoing automation, developments in computer science, and the invention and global use of the Internet have led us to the necessity of further improving what we have created technologically thus far, by means of… the optimisation of that which brought us technology in the first place: intelligence.

The colloquial term Internet of Things (IoT), which stands for the ambition to connect everything with everything in order to serve a certain goal by means of communicating sensors, computational devices and actuators, is right now entering a new phase, in which the computational device is not only measuring, analysing and controlling, but is becoming an intentional and active intelligent agent itself.[13] This means that the IoT evolves into a network of rational agents. (I do not see the necessity here of distinguishing current ramifications of this term, such as the Internet of Humans, Services, Senses, etc., since it all boils down to ‘whatever you put sensors on and connect to a computational device’.)

Although the term confusingly seems to imply that there is one IoT, in fact there are many systems involving AI that function independently or cooperatively underneath this broad concept. Therefore, in order to better describe the vision of ASI as a far advanced IoT, I will introduce here the term cyber-physical system (CPS). This term implies the connection of physical elements through a cyber network engineered to accomplish a specific goal. CPSs are meant to understand, and to have successful agency in, a changing environment, for example in a particular industry, a complex automated military operation or an autonomous car. They are developed to be adaptive.
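The sense-decide-act loop that makes a CPS adaptive can be sketched in miniature. This is a toy illustration of my own, not drawn from any real industrial controller; the class, goal and action names are hypothetical. It shows the defining feature: a cyber component continuously comparing measurements of the physical environment against an engineered goal and acting back on that environment.

```python
# Hypothetical sketch of a CPS: sense the physical world, decide
# against an engineered goal, act back on the environment.

class CyberPhysicalSystem:
    def __init__(self, goal_temperature):
        self.goal = goal_temperature  # the specific goal the system is engineered for

    def sense(self, sensor_reading):
        # Measurement arriving from a physical sensor over the cyber network.
        return sensor_reading

    def decide(self, measured):
        # Adaptive control: compare the environment to the goal, choose an action.
        if measured < self.goal:
            return "heat"
        if measured > self.goal:
            return "cool"
        return "idle"

    def act(self, sensor_reading):
        # One pass through the sense-decide-act loop.
        return self.decide(self.sense(sensor_reading))

plant = CyberPhysicalSystem(goal_temperature=21.0)
print(plant.act(18.5))  # below goal -> "heat"
print(plant.act(23.0))  # above goal -> "cool"
```

A CPHS, in these terms, is the same loop with a human among the sensors and actuators, which is why computational trust becomes a design requirement rather than a nicety.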

Where the IoT raises images of communicating refrigerators and washing machines, of running shoes connected with smart watches and insurance companies, and very likely, one day, of nanobots in your guts connected with your brain-computer interface, a CPS signifies the system underlying these phenomena. As the exemplary smart watch and brain-computer interface already indicate, the CPS includes the human, and is therefore called a cyber-physical-human system (CPHS) or socio-cyber-physical system. Here, computational trust is of the utmost importance for the CPHS to function properly, and it explains the necessity of research and external communication with regard to human trust and AI: ‘Building trust in human-centric AI’,[14] as the title of this research focal point of the European Commission illustrates.

I believe that ASI will not be a Singleton or Master Algorithm dwelling on top of the cyber food chain, but an advanced CPHS. Or, as I romantically like to call it, a Hyper Connected Computing Cosmos (HCCC). Here, not the human but the system will be in control. That said, since the human is part of the system, he is not excluded from power, provided that ASI as a HCCC is a system of decentralised and distributed systems.

Sensors will be everywhere. Or, to speak with Kurzweil,[15] the whole universe will be saturated with our intelligence: our cardiovascular system, our brains, nature, manufactured stuff, everything. An important footnote should be made here. This image of ASI assumes that humans will still (partly) exist in a physical world. An alternative is that we all become Uploads, if we can speak of individual humans at all and have not evolved into a single universal consciousness…

I started this article by stating that we are currently experiencing the development of the AI paradigm. A paradigm is a framework in which consensus exists about the meaning of things, events and situations. The validity of the paradigm strengthens itself through the problems it raises and solves. Since it is a paradigm, we are included. This means that we are part of the ASI in the making. Human-centredness from a human standpoint gives the impression that it is all about us. However, when we make a new Kant-like Copernican turn towards the system, then human-centred means not that we are the centre of the system, but that its focus is on comprehending this indispensable element: the human, who has not become a rational agent, yet.

[1] My word choice is a literary choice and does not reflect a standpoint in the gender neutrality discussion.

[2] Nick Bostrom: What is a Singleton? (2005)

[3] Pedro Domingos: The Master Algorithm (2015)

[4] Ciano Aydin: The posthuman as hollow idol: A Nietzschean critique of human enhancement (2017)

[5] Shane Legg, Marcus Hutter: A collection of definitions of intelligence (2007); Shane Legg: Machine Super Intelligence (2015)

[6] Mariarosaria Taddeo: Modelling trust in artificial agents, a first step toward the analysis of e-trust. Minds (2010)

[7] Taddeo 2010; Mariarosaria Taddeo and Luciano Floridi: The case for e-trust (2011)

[8] Guy Longworth: Faith in Kant; in Paul Faulkner and Thomas Simpson: The philosophy of trust (2017)

[9] Darwall, Domenicucci and Holton in Faulkner and Simpson 2017

[10] Marvin Minsky: The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind (2006)

[11] Max Planck cited by Martin Heidegger: Zollikon Seminars: Protocols — Conversations — Letters (2001)

[12] Nick Bostrom: Superintelligence (2014); Yuval Harari: Homo Deus (2015)

[13] Ciano Aydin et al.: Technological Environmentality: Conceptualizing Technology as a Mediating Milieu (2019)

[14] https://www.eesc.europa.eu/en/our-work/opinions-information-reports/opinions/building-trust-human-centric-artificial-intelligence-communication

[15] Ray Kurzweil: The Singularity Is Near (2005)

--

Ida Helena Rust
Institute for Ethics and Emerging Technologies

Expert in human-system adaptation in the era of digital transformation / PhD candidate on Critical and Creative Thinking and Artificial Super Intelligence