What is intelligence?

Michael V.
9 min read · Dec 14, 2018


Many people wonder about this question. There are many theories built on complex concepts, from magic to quantum physics. But what if it is all simpler than that? This article offers some thoughts about natural and artificial intelligence, along with a definition of what intelligence is.

A conceptual model is a representation of a system, made of the composition of concepts which are used to help people know, understand, or simulate a subject the model represents. It is also a set of concepts. Some models are physical objects; for example, a toy model which may be assembled, and may be made to work like the object it represents.

— Wikipedia

Model — a physical, mathematical, or otherwise logical representation of a system, entity, phenomenon, or process

— SEBoK

A model imitates the original system with some accuracy. The more accurate the model, the more constituent objects it has. An information model consists of information objects. At maximum accuracy, each modeled object has a corresponding information object, so there is a one-to-one mapping. From here on, by the word “model” I will mean exactly such a mapping: the specified correspondence between the real and the informational.

Information system — a system for information processing. Information enters the system in some way; to process it correctly, the system needs an information model.

Where does the information come from? From some external source. It can be called “reality” or “environment”. I will use the word “reality”.

Reality — all that is outside the information system.

So, to interact with reality, an information system should have some model of that reality. For each real object there should be a corresponding information object. Another information system is also a real object; accordingly, if we want to interact with it somehow, we also need a model of it.

In programming, the need for such a model is clearly visible in API development. There is some structure on the server, and there is a similar structure on the client: the same classes in PHP and in JS. This is not redundancy; it is the modeling of one system inside another.
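The client-side mirror of a server structure can be sketched as follows. This is a minimal illustration, not any real API: the class and field names are made up, and both “sides” are Python here for brevity, where a real project might use PHP on the server and JS on the client.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ServerUser:   # the structure as the server defines it
    id: int
    name: str

@dataclass
class ClientUser:   # the client's mirror of the same structure
    id: int
    name: str

# The server serializes its object; the client rebuilds its own copy.
# The duplication of fields is not redundancy: it is the client's
# model of the server-side object.
payload = json.dumps(asdict(ServerUser(id=1, name="Ada")))
client_copy = ClientUser(**json.loads(payload))
```

Each side can evolve only in lockstep with the other, which is exactly the one-to-one mapping described above.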

Intelligence

Natural intelligence is also an information system. It has sensors that provide input information. From this information it builds a model of what is happening outside the system, and from this model it makes decisions about actions. So we can give a definition of intelligence.

Intelligence — the ability of an information system to build a model of reality from input information.

A computer program is also an information system. Does it have intelligence? No, because its model is put into it by a programmer.
Consider a program for hard disk partitioning. Information about possible formats, the order of commands, response codes — all of it is described in the source code. The program has the model and updates its state from command responses, but it does not build the model and cannot change its structure. The point is not the presence of a model but the mechanisms for building it from input information.
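A toy sketch of that distinction (the formats and response codes here are illustrative, not from any real tool): the program's model is fixed in the source code, so only the values of its state can change, never the structure of the model itself.

```python
# A toy "partitioning tool": its model of reality is hard-coded.
# The set of known formats and the response codes never change at
# runtime; only state variables do. The program cannot "learn" a
# new format from input, which is why it has no intelligence in
# the sense defined above.

KNOWN_FORMATS = {"ext4", "ntfs", "fat32"}   # model structure: fixed
RESPONSE_OK, RESPONSE_ERR = 0, 1            # model structure: fixed

def format_partition(fs: str) -> int:
    """Update state according to the built-in model; never extend it."""
    if fs not in KNOWN_FORMATS:
        return RESPONSE_ERR   # an unknown format cannot be learned
    return RESPONSE_OK
```

Variable values change with each call, but the characteristics those variables correspond to stay the same, exactly as with the single-celled organisms discussed below.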

Artificial intelligence should build a correct model of reality. In particular, it should react to reality correctly, at least as well as a human. First, this gives us feedback by which we can tell whether the system has intelligence. We could also check in a debugger, but we would need to know what to look for there. Second, one of the goals is performing human activity in certain tasks.

All animals react to reality in some way. So we cannot simply say whether they have intelligence or not. We can only say whether there is more or less of it, whether it is enough for certain tasks, for building certain models.

Single-celled organisms do not have intelligence, because they have no elements that store and handle information about the environment. Their structure is set genetically; they appear with a ready-made model and cannot change it. Their ability to change internal state in response to external changes is similar to that of programs with a model built in from outside. In a program, variable values can change, but the characteristics those variables correspond to are always the same.

It is also worth noting that intelligence is closely related to the concepts of object and memory. An object is something that has state and behavior, something we recognize as the same at different moments of time. Memory is needed to store object features between those moments; this is where the model begins.

Senses

Sense — a process of perception of the data coming into an information system. Perception is how the data affects the model. The system can perceive the presence of data as positive or negative feedback; it seeks to increase positive feedback and decrease negative feedback.

Obviously, perception of any signals from sensors is a sense.

Can AI feel pain?

Note first that natural intelligence experiences several different feelings. Pain is the name of one of them. It signals damage. It is an unpleasant feeling: while it is present, the brain always pays attention to it. Usually a human cannot start or stop feeling pain at will. And people have several unpleasant feelings; fear, for example.

So, if an information system with intelligence receives a feeling carrying information about damage, determines it as unpleasant, seeks to minimize its impact, and cannot do so simply by cutting it off, then this is an analog of pain. Only an analog: we can merely establish a similarity, saying that “Feeling 1” is similar to human pain and “Feeling 2” is similar to pleasure. The main thing here is how the system itself reacts to this data. How do we make sure it is not lying? Check in a debugger.

The more shared features, the more similarity, including chemical and physical processes. We think it highly probable that animals have feelings similar to human ones, because they react like humans and also have nerve cells that likewise consist of organic compounds.

That means if we build an information model of a human down to the atoms and electric fields and simulate a cut on the finger, then yes, it will feel the pain, because in its information handling it will not differ from other people. Most likely it would be unethical to force it to feel pain. But we could simulate another level of reaction, or another feeling, like fear.

Consider a database management system with monitoring. If a failure occurs, the monitoring subsystem starts sending notifications with an alert level to the system administrator at some frequency. Can we say it feels pain? No. First, it does not have intelligence: the model is built in by a programmer. Second, although the signal is rated as unpleasant, the system does not seek to minimize it. Quite the opposite: it is intended to provide an exact model of what is going on with the system. It can be called an analog of damage receptors, but not of a being that has them. Notifications are a kind of data from receptors. But yes, it looks like feeling, and in informal speech we can say things like “the server feels bad”.
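A minimal sketch of such a monitoring subsystem (event fields and alert format are invented for illustration): on failure it keeps emitting alerts, faithfully reflecting the system's state rather than trying to suppress the unpleasant signal.

```python
# A toy monitoring subsystem: it reports every failure it observes.
# It never minimizes the "unpleasant" signal; its purpose is to
# keep the administrator's model of the system accurate.

def monitor(events):
    """Yield one alert per observed failure; never suppress them."""
    for event in events:
        if event["status"] == "failure":
            yield f"ALERT level={event['level']}: {event['msg']}"

events = [
    {"status": "ok", "level": 0, "msg": "heartbeat"},
    {"status": "failure", "level": 2, "msg": "disk full"},
    {"status": "failure", "level": 3, "msg": "db down"},
]
alerts = list(monitor(events))
```

The alerts here play the role of data from damage receptors; the "being" that reacts to them is the administrator, not the monitoring code.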

Consider a bot with AI in a computer game. The game is its reality, though a rather limited one. For correct actions, information about damage is required. If the bot handles it at will — performed some actions, paid attention to the wound, checked the signal parameters, got distracted, and continued its actions — then this is not an analog of pain. If the signal does not affect actions aimed at other goals (for example, the bot does not pull its hand back when a wound on that hand takes a hit), then it is also not an analog.
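The two criteria in that paragraph can be written as a predicate. This is a sketch only, and the field names are hypothetical: a damage signal counts as a pain analog when the bot cannot dismiss it at will and it interferes with actions aimed at other goals.

```python
# Sketch of the criteria above: a damage signal is an analog of
# pain only if the bot cannot freely ignore it AND it intrudes on
# actions pursuing other goals. Field names are made up.

def is_pain_analog(signal: dict) -> bool:
    dismissible_at_will = signal["can_ignore_freely"]
    affects_other_goals = signal["interrupts_other_actions"]
    return (not dismissible_at_will) and affects_other_goals

# Calmly inspected and set aside: just data from receptors.
casual = {"can_ignore_freely": True, "interrupts_other_actions": False}
# Unavoidable and intrusive: an analog of pain.
searing = {"can_ignore_freely": False, "interrupts_other_actions": True}
```

How the system itself reacts to the data is what matters, which is exactly what this predicate tries to capture.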

Other concepts

What is consciousness? It can be defined as the process of intelligence functioning: the process of obtaining information and updating the model of reality. It is closely related to the concept of “I”. “I” is the model element representing the system itself. But this does not imply infinite recursion, as in ‘the system has an “I” element, the “I” element denotes the system, therefore the “I” element contains a model of the “I” element’. No. “Model” does not mean a detailed internal structure; a model is a mapping, where one object is used in place of another in information processes. There is a “Start” button on a computer with Windows, and there is an information object “Computer” there; that does not mean infinity has appeared.
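The point about recursion can be made concrete with references. In this sketch (all names invented), the “I” entry of the model is just a reference to the system, not a nested copy of the whole model, so even a circular link does not produce an infinite structure.

```python
# "I" is a label mapped to the system itself, not a nested copy of
# the model, so no infinite recursion appears.

system = {"name": "robot-1", "sensors": ["camera", "mic"]}

model = {
    "door": "real door #7",   # info objects stand in for real ones
    "I": system,              # "I" is just a reference to the system
}

# Even when the system holds its own model, the result is a
# circular reference, not an infinitely expanding structure.
system["model"] = model
```

The mapping points back to the same object instead of expanding it, which is all that “model” means here.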

Do dogs have the concept of “I”? Yes: it is the object with which they associate commands like “Sit down”. That means dogs have consciousness and intelligence. But they cannot multiply numbers, for example. Perhaps this is because higher levels of abstraction require additional information elements that watch the lower levels and build a model from their data.

Consciousness is related to self-observation. That is, it is not just the receiving of information, but also the registration of the facts of receiving it, their reflection in the model, their link with the object “I”. This is the main part of consciousness. “If they poke me under the ribs, it tickles.” “Yesterday I didn’t know this; today I do.” That very “I think, therefore I am.” The result is a system with a loop: some elements from the output of the analyzing block are connected to its input. It can be assumed that only complex beings can have consciousness, those with enough information elements to organize such a link.

What is understanding? What does it mean when we say that a person understood something? It means he has built a correct model of what he analyzed: the information elements match the elements of the real system. Understanding is the building of a correct model.

Why is experience so important? Because it is a model. It allows processes to be modeled in informational form, and therefore to plan and to select a suitable variant.

What is curiosity? It is a desire to know the details of some model element. A model can have atomic objects at any level, which simply means we do not know their structure. We can see a cave from the outside and know that it is a cave without knowing anything more. “Do you want to know what is inside?”

Update: an important point. Objects in a model are exactly that: objects, with state and behavior, which are recognized as the same at different times. I think that if an information system has no mechanism for the concept of “the same”, it cannot be said to have intelligence.
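A mechanism for “the same” can be sketched as a toy re-identification step. The matching rule here is a deliberately crude assumption (a single numeric feature and a fixed tolerance): before updating the model, the system decides whether a new observation belongs to an already-known object or introduces a new one.

```python
# Toy "sameness" mechanism: an observation either updates a known
# object (recognized as "the same") or creates a new object in the
# model. Real systems would match on many features, not one number.

objects = {}     # the model: object id -> last known position
next_id = 0

def observe(position: float, tolerance: float = 1.0) -> int:
    """Return the id of the object this observation is matched to."""
    global next_id
    for oid, pos in objects.items():
        if abs(pos - position) <= tolerance:
            objects[oid] = position   # same object, state updated
            return oid
    objects[next_id] = position       # a new object enters the model
    next_id += 1
    return next_id - 1

a = observe(10.0)   # a new object appears in the model
b = observe(10.4)   # recognized as "the same" object
c = observe(50.0)   # a different object
```

Without some such mechanism, every observation would be a brand-new object and no model of persistent objects could be built.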

Chinese room

Let’s try to analyze the Chinese room according to that.

There is a room with a human inside; he has an instruction, guided by which he moves Chinese characters around. The room answers questions in Chinese, but the human does not know Chinese. The answer is simple and has already been given: it is the “whole system” that understands Chinese. But there are arguments against this. Let us consider a few points.

It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output.

Strictly speaking, this is impossible, because the answer requires additional data that is not contained in the question. If the room really answers correctly, that data is stored somewhere: in a database (the filing cabinets) or in the instructions themselves. This is its model of reality.

Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test.
The question Searle wants to answer is this: does the machine literally “understand” Chinese?

The correct versions should be:
“this program performs its task so convincingly that it comfortably passes the Turing test”
“does the program literally “understand” Chinese?”
Because it is the program that has the model and handles the input information.

Each simply follows a program, step-by-step, producing a behavior which is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

It is the computer that would not be able to understand; the program would. The interpreting system just transfers data to the program, and the program handles it.
To make the human understand Chinese, his own model would have to be extended with the model from the program. While these data stay inside the virtual machine, they are unavailable to him.

Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious.

Consciousness is the computational model itself. More precisely, it is a part of the process that builds the model. So a similar process running on a computer will have consciousness.
Consciousness and mind are about information processing. A computational model of a rainstorm will not leave us all wet, but it will show us a nice picture, similar to a photo of a real rainstorm.

(A1) Programs are formal (syntactic)
(C1) Programs are neither constitutive of nor sufficient for minds.
This should follow without controversy from the first three: Programs don’t have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.

Programs are formal at a low level: memory cells and machine commands. But a program can hold semantics in those memory cells and use machine commands to extract the meaning of the input information. A Chinese room that always answers correctly is not formal in the “syntactic” sense, because the symbols in the question alone are not enough to always give a correct answer. So (C1) is incorrect.

(A3) Syntax by itself is neither constitutive of nor sufficient for semantics…

Right. Understanding is the building of a correct model; syntactic constructions just refer to model elements.

Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head? Then the whole system consists of just one object: the man himself.

And when he executes the program, the program will understand Chinese.
