Why unimaginably intelligent machines are just around the corner

Maciej Wolski
9 min read · Oct 2, 2020
What if we would aim to build whole Artificial Brains instead of just Neural Networks?

“AI is currently very, very stupid (…)” — Andrew Moore, VP of AI for Google Cloud

“My view is throw it all away and start again” — Geoff Hinton, Google Brain

A few years ago I heard that some math professor does not “believe” in Artificial Intelligence. It almost made me angry, or at the very least amazed me. At that time I was already very excited about the topic. And soon it became my obsession.

Yes, obsession is the right word. I am pretty obsessive and usually don’t stop until I get satisfying results from my work. Ever tried to imagine hundreds… no, thousands of possible Big Bang scenarios? And how can it be that the universe is infinite and expanding at the same time? (Hint: it probably isn’t, but that is a topic for a very different article)

If not, then you may not fully grasp my level of determination.

But how is it possible that the whole world thinks current AI is something very sophisticated and advanced that will wipe us off the face of the Earth within the next decades, while Google’s executives, including the “Godfather of Deep Learning” himself, and some anonymous math professor are not so impressed?

Well, if you have taken the effort to understand the basics, you know that current AI is pretty dumb. You need to carefully prepare a dataset for it to learn from; you need to optimize it, monitor it and supervise it. And if something new appears in the environment, you need to train it all over again.

It is also so energy-consuming and inefficient that it reminds me of the elegance of the first computers, which were the size of a room and had the performance of a desk calculator.

I often say that we are in the Stone Age of Machine Learning and Artificial Intelligence. And once we realize that fact, growth will happen much faster.

Try to look from the perspective of people living in the year 2100: will they be impressed by our AI? Probably about as much as we are now excited by the telegraph.

Sometimes I also meet IT professionals who look at me in a weird way (or with a smile on their face) when I describe what I do professionally.

At AGICortex, I work on Artificial Brains.

But how? Corporations are investing millions, thousands of people are doing research… It is not possible. “I know people who work in the industry and they say it is impossible”.

Well, I usually refrain from saying directly that I was determined enough to check hundreds… no, thousands of possible ways to do it better in the future. And from asking whether they have even checked once, or just followed other people’s opinions.

Current AI is designed for narrow applications and has very little in common with real intelligence.

Intelligence is about adaptation, real-time changes and varied responses to the same stimuli. It is also about self-organization, gathering and incorporating new knowledge based on observations and experiences.

The ability to deal with unforeseen events, with new challenges.

About autonomy, imagination, planning ahead, communication and patient observation.

Understanding the difference between expected and unexpected uncertainty and volatility. Recognizing when there is a need to learn faster or to learn more, and what is irrelevant and should be forgotten.

So when I hear about inventors, entrepreneurs and scientists working on digital telepathy, brain-computer interfaces, neural implants or… transferring our consciousness to machines for an immortal life, I often wonder: why does building an Artificial Brain seem so impossible?

Is it really harder?

It always starts with the right direction, a realistic path to your goal. Are we recreating a biological organ here, one that needs not only to perform computations but also to power and sustain itself?

No.

Are we limited to the space available in the skull?

No.

Do we need to supply a cocktail of chemicals to each neuron, or just electricity?

Just electricity.

That is why we do not need to fully understand how everything is done in the human brain. We need to understand what is done.

I don’t know how many people realize that ambitious plans should be built backward, from the ultimate goal back to the steps necessary to achieve it. Otherwise, it is very easy to hit a dead end.

But I know that the world’s leading AI researchers have finally agreed that what is wrong with Deep Learning now is the supervised learning mode, based on collecting datasets with all the correct answers.

It is hard to abandon ideas that have long been successful at some tasks, and much easier to apply them to everything else in the hope that they will work there too.

But that is not how we will get to Artificial General Intelligence.

Yoshua Bengio, in a presentation at the NeurIPS 2019 conference, showed that to progress we need to realize functions similar to both System 1 and System 2 in the brain. He compared them to “Current DL” and “Future DL” respectively.

Even though the two are about completely opposite goals and ways of operating.

One is fast and intuitive, the other slow and analytical.

Ask yourself: can you build the latter from the former? To me, they are from different worlds.

It seems much more reasonable to start with a blank page, analyze what we already have available, and decide what we still need to invent.

And we have a large reservoir of tools available: from graphs, decision trees and causal reasoning algorithms to various types of neural networks, including some that have been a little bit forgotten.

We also have many clues from neuroscience that are not considered at all in the current AI technology stack. My favorites are astrocytes, which regulate neurons but are completely skipped in AI implementations, and the multiple subcortical components that allow autonomous operation, not only in us but also in animals.

So if we know that the brain does so many different things, and we have so many tools available, why not combine them and experiment with different setups?

That is what I did in the past.

People like to discard things very quickly. Because the theory of Hierarchical Temporal Memory (HTM), defined by Jeff Hawkins and Numenta researchers, already exists, they think there is no point in further experimenting with architectures inspired by cortical columns in the brain. Even though Geoff Hinton’s capsules are inspired by them as well.

If model-based learning overtook instance-based techniques, then supposedly it makes no sense to think seriously about the latter.

And so on.

But the truth is that the modular architecture of the Artificial Brain (a uniform learning mechanism based on repeated computational structures, different feature extractors with various reasoning modes, and a group of supporting modules) is probably the only way to realize the goal of advanced general intelligence.

Are you excited by GPT-3 and already waiting for AGI to emerge in one of the next iterations? Try to put it inside a robot and check how much it learns about its environment after leaving it alone for some time…

Although it is interesting, the main thing GPT-3 proves is that to get great results across multiple dimensions, you need huge neural architectures.

Wouldn’t it be great to have such an enormous number of neurons and parameters, but use only those that are useful for any given task?

That is what the human brain does to get its amazing efficiency and performance.

You have heard that we use only a few percent of our brain at any single moment, right?

But how does the brain know which parts to pick? Well, there is a dedicated component for that: the thalamus. It is a relay station from the senses to the cortex, and between different parts of the cortex.
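To make that less abstract, here is a minimal sketch of the routing idea in Python: a thalamus-like router scores a pool of small expert units and activates only the top few for each input. Everything in it (the Expert and Router classes, the top-k rule) is invented for this illustration; it is not a description of our actual architecture.

```python
# A minimal, purely illustrative sketch: a "thalamus-like" router activates
# only a few expert units per input, so most of the system stays silent.
# The Expert/Router classes and the top-k rule are assumptions for this demo.
import numpy as np

rng = np.random.default_rng(0)

class Expert:
    """A tiny linear 'cortical unit' with random weights."""
    def __init__(self, dim_in, dim_out):
        self.W = rng.normal(0, 0.1, (dim_in, dim_out))
    def __call__(self, x):
        return x @ self.W

class Router:
    """Scores every expert for the current input and picks the top k."""
    def __init__(self, dim_in, n_experts, k=2):
        self.G = rng.normal(0, 0.1, (dim_in, n_experts))
        self.k = k
    def __call__(self, x):
        scores = x @ self.G
        return np.argsort(scores)[-self.k:]   # indices of the chosen experts

experts = [Expert(16, 8) for _ in range(32)]
router = Router(16, len(experts), k=2)

x = rng.normal(size=16)
chosen = router(x)
y = sum(experts[i](x) for i in chosen) / len(chosen)
print("active experts:", sorted(chosen.tolist()), "-> output shape", y.shape)
# Only 2 of 32 units computed anything: roughly 6% of the "brain" was used.
```

Mainstream Deep Learning uses the same trick under the name mixture-of-experts to keep giant models cheap to run.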

And what allows us to make different choices or perform different actions when confronted with similar situations?

First of all, chemical signaling through hormones and neurotransmitters/neuromodulators.

It is as simple as that: you pass by a restaurant with tasty food; one day you feel hungry (the hunger hormone at work) and on another you don’t. That drives different decisions and actions.
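How little machinery does that take? Here is a toy sketch. The hormone name and the threshold are made up; the point is only that one internal scalar changes the response to an identical stimulus.

```python
# Toy sketch of "same stimulus, different action": one internal chemical level
# tips the decision. The hormone name and 0.5 threshold are arbitrary choices.
def decide(stimulus: str, ghrelin: float) -> str:
    """Pick an action for a stimulus, modulated by a hunger-hormone level."""
    if stimulus == "restaurant":
        return "enter and eat" if ghrelin > 0.5 else "walk past"
    return "ignore"

print(decide("restaurant", ghrelin=0.9))  # hungry day -> enter and eat
print(decide("restaurant", ghrelin=0.1))  # sated day  -> walk past
```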

I often hear that we are not able to mimic emotions, and that machines can’t be like us for many reasons. But it seems to me that the notion of machines not being creative has already been sufficiently disproven by all these painting/music/poem generators.

It is a similar fear of losing our perceived special position in the world, the same fear that kept people believing Earth was the center of the universe for so long.

Well, we are biological machines, living a significant part of our lives on autopilot. Believe it or not, this is true. It is so common that people wonder afterward: why did I make that decision, why did I say that? Because your autopilot and chemical signaling make decisions for you, unless you override them.

Fun fact: all spiritual traditions organized around meditation try to show you exactly this. There is a very rich life happening under your skull without your conscious awareness. And the moment you fully realize the meaning of these words, you are awakened.

We can consciously override the automatic reactions of our brain and body, just as we can compute emotions once we understand their composition. And the most recognized emotions are various combinations of neurochemicals: mainly dopamine, noradrenaline and serotonin.

Ever read about the research in which rats, conditioned to believe that an electric shock is unavoidable, stop reacting to the open door of the maze? Well, blame the associated low level of dopamine activity.

What makes the difference between picking the fight option or the flight option? The current level of noradrenaline, combined with low serotonin and high dopamine.
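That fight-or-flight split matches Hans Lövheim’s “cube of emotion”, a published (and debated) model that assigns a basic affect to each low/high combination of serotonin, dopamine and noradrenaline. Here is the whole cube as a lookup table, to make the computability claim concrete; thresholding the levels at 0.5 is my simplification, not part of the model.

```python
# Emotions as combinations of neurochemicals, following the Lovheim cube of
# emotion (a real but contested model). The 0.5 threshold is a simplification.
LOVHEIM_CUBE = {
    # (serotonin, dopamine, noradrenaline): basic emotion
    (0, 0, 0): "shame/humiliation",
    (0, 0, 1): "distress/anguish",
    (0, 1, 0): "fear/terror",     # flight: low noradrenaline
    (0, 1, 1): "anger/rage",      # fight: high noradrenaline
    (1, 0, 0): "contempt/disgust",
    (1, 0, 1): "surprise",
    (1, 1, 0): "enjoyment/joy",
    (1, 1, 1): "interest/excitement",
}

def emotion(serotonin: float, dopamine: float, noradrenaline: float) -> str:
    key = tuple(int(v > 0.5) for v in (serotonin, dopamine, noradrenaline))
    return LOVHEIM_CUBE[key]

# With low serotonin and high dopamine, noradrenaline alone picks the outcome:
print(emotion(0.2, 0.8, 0.9))  # anger/rage  -> fight
print(emotion(0.2, 0.8, 0.1))  # fear/terror -> flight
```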

Once we realize that all these things are computable (emotions and other features of the mind), we are left with the question of what to compute, and how.

At AGICortex we are working with 3D neural networks that are very different from their traditional counterparts.

In the ANNs you knew before, the computational unit is a single neuron. In our case it is a whole group of them: a mini neural network in itself, which can be used alone or in combination with others.

Can you imagine generating Machine Learning models on the fly from a big repository of available components? Impossible?

Not if you have a dedicated module that is aware of all the available units and their characteristics.
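Here is how simple the core of that idea can look: the unit is a mini-network rather than a single neuron, and a registry that knows every unit’s shape and purpose chains compatible units into a model on demand. All names and design choices below are made up for the example; they are not AGICortex internals.

```python
# Illustrative only: the computational unit is a mini-network, and a registry
# that knows every unit's signature assembles a model on the fly.
from functools import reduce
import numpy as np

rng = np.random.default_rng(1)

class MiniNet:
    """The unit: a small 2-layer MLP, not a single neuron."""
    def __init__(self, dim_in, dim_hidden, dim_out, tag):
        self.W1 = rng.normal(0, 0.3, (dim_in, dim_hidden))
        self.W2 = rng.normal(0, 0.3, (dim_hidden, dim_out))
        self.dim_in, self.dim_out, self.tag = dim_in, dim_out, tag
    def __call__(self, x):
        return np.tanh(x @ self.W1) @ self.W2

class Registry:
    """Chains shape-compatible units, one per requested capability tag."""
    def __init__(self, units):
        self.units = units
    def assemble(self, dim_in, dim_out, tags):
        pipeline, dim = [], dim_in
        for tag in tags:
            unit = next(u for u in self.units if u.tag == tag and u.dim_in == dim)
            pipeline.append(unit)
            dim = unit.dim_out
        assert dim == dim_out, "no unit chain ends at the requested width"
        return lambda x: reduce(lambda h, u: u(h), pipeline, x)

units = [MiniNet(16, 32, 8, "edges"),
         MiniNet(8, 16, 4, "shapes"),
         MiniNet(16, 8, 4, "texture")]
model = Registry(units).assemble(dim_in=16, dim_out=4, tags=["edges", "shapes"])
print(model(rng.normal(size=16)).shape)  # (4,) - assembled without retraining
```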

Would you like the possibility of real-time incremental learning without the effects of “catastrophic forgetting”? Impossible?

Not with a kind of external memory, deeply incorporated into the neural architecture.
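To see why external memory sidesteps catastrophic forgetting, consider this deliberately simple sketch: if learning is an append to a store rather than an overwrite of shared weights, a new task cannot erase an old one. A plain nearest-neighbor memory is enough to demonstrate the principle; it is the smallest example I can give, not how our architecture works.

```python
# Minimal external-memory sketch: learning appends (embedding, label) pairs,
# never overwrites, so a new task cannot erase an old one.
import numpy as np

class ExternalMemory:
    def __init__(self):
        self.keys, self.labels = [], []
    def learn(self, embedding, label):
        """Incremental learning is one append; nothing old is touched."""
        self.keys.append(np.asarray(embedding, dtype=float))
        self.labels.append(label)
    def recall(self, embedding, k=3):
        """Predict by majority vote among the k nearest stored embeddings."""
        dists = [np.linalg.norm(np.asarray(embedding) - key) for key in self.keys]
        votes = [self.labels[i] for i in np.argsort(dists)[:k]]
        return max(set(votes), key=votes.count)

mem = ExternalMemory()
rng = np.random.default_rng(2)
for _ in range(20):                    # first "task", learned early
    mem.learn(rng.normal(+1.0, 0.3, 4), "cat")
for _ in range(20):                    # second "task", learned much later
    mem.learn(rng.normal(-1.0, 0.3, 4), "dog")
print(mem.recall(np.full(4, +1.0)))    # still "cat": nothing was forgotten
print(mem.recall(np.full(4, -1.0)))    # "dog"
```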

Want to make your AI agent aware of time and past events?

Equip it with a dedicated module, inspired by the biological hippocampus.
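As a minimal illustration of what such a module buys you, here is a toy episodic memory that stamps every observation with time, so the agent can ask what happened recently or what preceded a given event. The interface is invented for this example; the biological hippocampus does vastly more.

```python
# Toy hippocampus-inspired episodic memory: every event is stored with a
# timestamp, giving the agent a queryable sense of time and past events.
import time
from dataclasses import dataclass, field

@dataclass
class Episode:
    t: float
    event: str

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def record(self, event: str):
        self.episodes.append(Episode(time.time(), event))

    def since(self, seconds: float):
        """Everything experienced within the last given number of seconds."""
        cutoff = time.time() - seconds
        return [e.event for e in self.episodes if e.t >= cutoff]

    def before(self, event: str):
        """Everything experienced before the first occurrence of an event."""
        for i, e in enumerate(self.episodes):
            if e.event == event:
                return [p.event for p in self.episodes[:i]]
        return []

mem = EpisodicMemory()
for step in ("saw door", "heard noise", "door opened"):
    mem.record(step)
print(mem.before("door opened"))  # ['saw door', 'heard noise']
print(mem.since(60.0))            # all three, recorded within the last minute
```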

And so on.

Intelligence starts at the lowest organizational level. The biological cell is able to sustain itself and adapt its own operation to the state of its internal and external environment. Then cells build specific organs and whole organisms, from scratch. Again and again.

Maybe, then, artificial neural units that are more complex, more adaptable and more capable at the same time can push us toward more intelligent AI? Just as the quite complex individual cells and cortical columns in the brain do?

To sum up, there are two ways to proceed:

1) Believe that the world can be contained in a model. Collect datasets and train the models with backpropagation, making it harder to explain what is happening inside and impossible to add new information effortlessly.

2) Build self-learning AI with complex, modular neural architectures made of repeated sets of highly capable computational units (this is how our neocortex is built), with high-level structure and low-level adaptation to get energy efficiency, autonomous incremental learning and explainability.

Support them with a range of additional subcortical modules that provide further capabilities, because neural networks on their own are just an associative memory between input and output, and that is not enough to get us to the AGI level.
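To tie option 2 together, here is a deliberately naive wiring sketch: router, repeated units, external memory and neuromodulation in a single perception-action loop. Every class is a stub standing in for a much larger module; the structure is my illustration, not the actual AGICortex design.

```python
# Naive wiring sketch of the modular picture described above. Each Stub stands
# in for a full module; only the flow of one perception-action step is real.
class Stub:
    def __init__(self, name):
        self.name = name
    def __call__(self, x):
        return f"{self.name}({x})"

class ArtificialBrain:
    def __init__(self):
        self.router = Stub("thalamus_router")     # picks the active units
        self.units = Stub("cortical_units")       # repeated mini-networks
        self.memory = Stub("external_memory")     # incremental, no forgetting
        self.chemistry = Stub("neuromodulation")  # biases every decision

    def step(self, observation):
        routed = self.router(observation)
        features = self.units(routed)
        self.memory(features)                     # store, never overwrite
        return self.chemistry(features)           # state-dependent action

brain = ArtificialBrain()
print(brain.step("camera_frame"))
```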

Then just turn the system on.

Welcome to the age of Artificial Brains.

And it is just the beginning…


Maciej Wolski

Futurist. Technologist. Independent AGI researcher. Neuroscience enthusiast. Extreme learning machine.