Published in Philosophy and Artificial Intelligence #2 — Intentionality

Credit to ThisIsEngineering @ Pexels

See the previous post here.

Consciousness as an emergent phenomenon

If you spend some time watching termites, bees, or ants, you’ll be amazed at how complex the processes they go through every day are. This is especially true of fungus-farming ants such as the Attini, which cultivate fungi for food. This is something, isn’t it? And they don’t understand what they are doing: a mass of clueless insects cooperating in projects that amaze us, humans with brains.

Phenomena such as these can lead to the conclusion that what we’re experiencing is just an emergent process of cooperation among billions of clueless cells in our brain (or, for that matter, in our body). In this view, evolution would be much smarter than we are, and our consciousness would be just a tool for the post hoc rationalization of what has already been done. There would then be no essential difference between the human mind and a colony of termites.

However, emergence does not really explain the phenomenon of consciousness and understanding. If consciousness is just an emergent phenomenon, why do we need to understand things at all?

Why do you need to make everything about something?

This last question has been raised by many, including Sir Roger Penrose with his quantum theory of consciousness: understanding and consciousness are inherently non-computable, unlike whatever termites or ants are doing, even if the results look like the product of rational reasoning.

This is because mental phenomena are always about something. When a volcano erupts, it “erupts about nothing”: certain processes led to the eruption, and it just happened. Even if you add some kind of theological interpretation to the physical phenomena, the event itself is just as it is. When you think, talk, or simply look at something, those events are always about something else. The German philosopher Franz Brentano called this power of the mind intentionality. All human action is intentional, that is, directed towards something.

Meaning, sense and identity

The human mind perceives physical objects and constructs abstractions, which then organize perception. Understanding is the process of making the whole system of abstractions consistent, so that our brains can quickly retrieve information when they need it.

Abstractions give us a significant evolutionary advantage. When we lived in the jungle, looking for food and avoiding predators, the analysis of our perceptions could not be based on complex calculations. Our ability to survive depended on how quickly we could make the right decision (mostly whether or not to run away). What we call intelligence today is also based on the ability to think quickly, to recognize patterns in the environment, and to react according to what we perceive and understand.

However, the fundamental abstraction is related to our perception of time and our social environment: it is our identity. We have the feeling that our past selves and our present selves are one and the same person. We can learn from our mistakes and improve our models of reality, because we expect that what happened in the past can happen again in the future. And most importantly: we know that our time is limited. We are all going to grow old and die at some point, as will everyone we know, including, one day, our children.

Social relations, based on empathy and theory of mind, enable us to ask the “why” question. We can not only observe things that are going on, but also create an abstraction of someone else’s point of view. This gives us a perspective from which to question what we see and to look deeper in order to understand it.


What a machine can know

Let’s go back to the question of thinking machinery. Machine learning is very different from the way the human mind learns. We’re getting better at it, and every day there are new ML models that need less and less data to make more accurate decisions. But they still operate on syntax, without understanding. They don’t need to be conscious to work, or to perform some intellectual tasks better than humans, but they don’t have the ability to look for meaning and ask the “why” question.

If we sent a machine to an alien planet with different laws of physics, or with forms of life unknown on Earth, it would probably be able to adapt and survive. However, there would be no reason for the machine to ask questions, formulate a scientific theory, or understand this new world.

In other words, everything a machine knows is based on human knowledge, even if in the process of machine learning we discover new things. Things discovered that way matter only because there is a human observer behind them.



Olgierd Sroczynski