Don’t Conflate Agency with Intelligence
There is a widespread opinion that agency is essential to any kind of intelligence. It comes in many flavors: embodied intelligence, active learning, active inference, goal-oriented learning, maintenance of homeostasis, a focus on problem-solving, and so on. It’s an important concept, as all known examples of natural intelligence are closely tied to their agency.
But there is a catch: the typical inference from this observation is that agency itself is an integral part of intelligence. That leads to attempts to build AI with agency as the fundamental principle. This is wrong. Agency and intelligence should be treated as two separate components, in the same way the environment and the agent are typically separated in the environment–agent dichotomy.
To see why, it’s enough to run the following thought experiment. Take any natural or artificial intelligent agent, keep the environment and the agent’s body, and swap the intelligence for a different one. For example, take a real fish and replace its brain with an artificial surrogate that hard-codes the complete set of the fish’s reactions. Then do the opposite: place the fish’s brain into an artificial body with the same set of sensors and artificial muscles. Release both into the same environment (setting aside second-order details, such as converting the fish’s usual food into an equivalent amount of energy for the artificial body).
In neither case is there any fundamental reason why such a swap would be impossible. Agency and intelligence are two different things, even though they typically work together.
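The swap described above can be sketched as a body/intelligence split behind a common interface. All names here are hypothetical: the “body” handles sensing and acting, while the “intelligence” is any callable mapping sensor readings to motor commands, so it can be replaced without touching the body or the environment loop.

```python
class FishBody:
    """Fixed sensors and actuators; knows nothing about decision-making."""

    def sense(self, environment):
        return environment.get("light", 0.0), environment.get("food_smell", 0.0)

    def act(self, command, environment):
        environment["position"] = environment.get("position", 0) + command


def hardcoded_brain(sensors):
    """A surrogate 'brain': a fixed table of reactions."""
    light, food_smell = sensors
    return 1 if food_smell > light else -1


def learned_brain(sensors):
    """Stand-in for a trained model with the same input/output contract."""
    light, food_smell = sensors
    return 1 if food_smell > 0.5 else -1


def run(body, brain, environment, steps=3):
    """The environment loop is identical regardless of which brain is plugged in."""
    for _ in range(steps):
        body.act(brain(body.sense(environment)), environment)
    return environment["position"]


env = {"light": 0.2, "food_smell": 0.9, "position": 0}
# Same body, same environment, two different "intelligences":
a = run(FishBody(), hardcoded_brain, dict(env))  # → 3
b = run(FishBody(), learned_brain, dict(env))    # → 3
```

Nothing in `run` or `FishBody` depends on which brain is installed, which is the point: the agency side (sensors, actuators, the loop) and the intelligence side vary independently.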
By the way, the same logic applies to reinforcement learning in its narrow meaning as an ML term, where a reinforcing component is responsible for the agent’s behavior, while the intelligence is represented by a separate model (or a set of them) and can, in some cases, be substituted without changing the resulting behavior.
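A minimal sketch of that separation, with all names hypothetical: the reinforcing component (the reward) belongs to the agency side and stays fixed, while the model supplying the “intelligence” is pluggable, and swapping it need not change the resulting behavior.

```python
def reward(state, action):
    """The reinforcing component: fixed, belongs to the agency side."""
    return 1.0 if action == state % 2 else 0.0


def tabular_model(state):
    """One possible intelligence: imagine a learned lookup table."""
    return state % 2


def rule_model(state):
    """A substitute intelligence with identical behavior on these states."""
    return 0 if state % 2 == 0 else 1


def total_return(model, states):
    """The behavior, as measured by the reinforcing component."""
    return sum(reward(s, model(s)) for s in states)


states = range(10)
# Swapping the model leaves the resulting behavior (and return) unchanged:
same = total_return(tabular_model, states) == total_return(rule_model, states)  # True
```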
To make this even clearer, here is another thought experiment. Let’s imagine a minimum intelligent agent and show that, fundamentally, none of the active properties of agency are required to preserve intelligent processing, so they aren’t components of intelligence.
The minimum agent has the following properties:
- It’s energy independent.
- It’s not repairable.
- It’s not capable of any external behavior.
- It has diverse and non-discriminating sensory input.
- It has computational power higher than needed for its activity.
- It can build a model of the world.
- Its algorithm is reflexive in nature: it automatically processes any new input, updating the current model.
Such an agent would build and constantly update a representation of the world without needing any additional properties. Thus, it produces an intelligent result, which means it has intelligence without any of the active properties of agency.
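The minimum agent above can be sketched in a few lines (all names hypothetical): it has no actuators and no goals; its only activity is reflexively folding each new observation into a trivial world model, here a per-signal running average.

```python
class MinimumAgent:
    """A passive agent: it only observes and models, never acts."""

    def __init__(self):
        self.model = {}   # the world model: per-signal running averages
        self.counts = {}  # how many observations each signal has received

    def observe(self, signal, value):
        """Reflexive update: any new input is processed automatically."""
        n = self.counts.get(signal, 0)
        mean = self.model.get(signal, 0.0)
        self.counts[signal] = n + 1
        self.model[signal] = mean + (value - mean) / (n + 1)


agent = MinimumAgent()
for value in [0.0, 1.0, 2.0]:
    agent.observe("temperature", value)

# The agent has built a (trivial) representation of the world...
result = agent.model["temperature"]  # → 1.0
# ...without ever acting on the environment.
```

The running average is of course a stand-in for real world-modeling; the point is only that the update loop needs no behavior, goals, or energy management to keep running.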