Universe and AI
I read recently that physicists working with Microsoft presented a theory that we may live in a Universe that is a self-learning computer. The laws of physics may change over time, and we may never be able to unify them.
After centuries of mind-bending thought, we may rediscover the old truth that even at the most fundamental level, the only constant… is change. Biological species evolve, mutate and adapt. Perhaps the Universe does the same.
The concept of reality as a computation (a simulation, if you prefer) can explain phenomena related to quantum mechanics or a boundless (not infinite; these are not the same) Universe. As long as you treat computation as an exchange of information that may also happen outside silicon-based processors.
Infinity is a potential, not something that exists on a material level. You can keep adding real numbers for as long as you wish, but the result will never equal infinity. In the same way, you may never reach the end of the world, but that does not mean the world is infinite.
I am patiently waiting for the results of another work showing that the Universe is not continuously expanding at all, and that the expansion is just an illusion of our limited perception, unable to perceive more than 3 dimensions.
See for yourself how a 3D shape (left) could look to a creature able to perceive only 2D (right). It appears to expand and then contract. Later in the original video, you can even see how a 4D shape would look in our case, as beings used to 3D:
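The geometry behind that illusion is easy to compute. A minimal sketch (plain Python, written for this article): a sphere passing through a flat observer's plane shows up as a circle whose radius grows from nothing to the full sphere radius and then shrinks back, which a 2D creature would interpret as expansion followed by contraction.

```python
import math

def cross_section_radius(sphere_radius: float, z: float) -> float:
    """Radius of the circle a 2D observer sees when a sphere of
    radius R crosses their plane at height z (0 if no intersection)."""
    if abs(z) >= sphere_radius:
        return 0.0
    return math.sqrt(sphere_radius ** 2 - z ** 2)

# A unit sphere moving through the plane: the visible circle
# grows to the full radius at z=0, then shrinks again.
for z in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    print(f"z={z:+.1f} -> visible radius {cross_section_radius(1.0, z):.3f}")
```

The same logic, one dimension up, is why a 4D shape crossing our 3D space would look to us like an object appearing, swelling and vanishing.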
All of this resonates with my lifelong view that we should be involved in multiple disciplines if we want to truly understand the big picture. Knowledge should be grown through personal experience, not simply passed on or trained.
Combining physics with machine learning — why not? The same goes for neuroscience, biology and (bio)chemistry. You never know where you will find the idea that solves your problem.
When it comes to the attributes of the Universe, we still have a lot to confirm. But think for a moment what would have happened if we had stuck to the old dogma of a fixed, mechanistic Universe with the Earth at its center…
Reality is often surprising. And the fate of our civilization is constant change (and hopefully progress).
Sequential vs parallel learning
The sequential process of developing science and knowledge has its advantages and drawbacks. Obviously, we don't need to start over from scratch but can learn from the results of other people's work. But passively reading and accepting information is not the same as personally evaluating what the author shared.
Of course, it would be hard to analyze everything. But if you have decided to learn more about a specific topic, and maybe even contribute your own work to some discipline, you should double-check whether the principal elements of your area of interest can still be considered optimal. Or maybe it is necessary to take two steps back to make five forward.
Parallel learning provides exactly that. You grow your knowledge in multiple dimensions, and then you can see what other people (or at least your past self) could not. Just like a 2D creature suddenly exposed to additional dimensions.
Look from different perspectives: evaluate computer algorithms with the eye of a nature enthusiast, or physics from a machine learning engineer's point of view. This is a recipe for true learning that embeds new knowledge deep in your memory, instead of plain copy-paste.
In many cases, it makes learning new things easier too. I remember the dramatic progress I made in learning to design hardware with HDL languages while tinkering with low-level neural network components at the same time.
In the short term it costs time, but in the long term it saves it. Broad horizons and a practical mindset help you solve problems faster.
What is the future of Machine Learning?
Neural networks as a concept have evolved a lot since their conception in the 1940s. Today they are deep, have multiple layers and can do a lot of useful things.
But at this point it is worth asking — do we want a single neural network, or a highly capable digital brain with multiple networks? An energy-hungry black box that in most cases spits out the correct answers, but in many others is terribly wrong? In ways that we would never be.
Do we want AI to be taught by us, or grown through experience? To ingest our knowledge and think like us — or to propose mind-blowing new concepts that we found too hard to comprehend?
Of course, we want to have something better. So is it possible to transform one into another?
Some time ago I encountered the concept of neural-backed decision trees. In this approach, each decision in the tree is made by an (often quite simple) neural network. This makes it possible to clearly explain the decision process of an AI system, step by step, without cutting off the benefits of raw sensory data processing usually associated with deep networks.
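To make the idea concrete, here is a toy sketch of the concept — not any published implementation. Each node routes an input with a single logistic neuron (a hypothetical stand-in for the "often quite simple" networks above), and every prediction comes with a human-readable trace of the path taken:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

class NeuralNode:
    """A decision node whose routing is a tiny neural unit
    (one logistic neuron) instead of a hard-coded threshold."""
    def __init__(self, weights, bias, left, right, name):
        self.weights, self.bias = weights, bias
        self.left, self.right = left, right
        self.name = name

    def decide(self, x, trace):
        score = sigmoid(sum(w * xi for w, xi in zip(self.weights, x)) + self.bias)
        branch = self.right if score >= 0.5 else self.left
        trace.append(f"{self.name}: score={score:.2f} -> "
                     f"{'right' if score >= 0.5 else 'left'}")
        if isinstance(branch, NeuralNode):
            return branch.decide(x, trace)
        trace.append(f"leaf: {branch}")
        return branch

# Hypothetical two-level tree; the weights are illustrative, not trained.
tree = NeuralNode([1.0, -1.0], 0.0,
                  left="class A",
                  right=NeuralNode([0.5, 0.5], -0.6,
                                   "class B", "class C", "node-2"),
                  name="root")
trace = []
label = tree.decide([2.0, 0.5], trace)
print(label)
print("\n".join(trace))   # the step-by-step explanation
```

The trace is the point: unlike a monolithic network, you can read off exactly which intermediate decisions led to the final answer.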
Sounds awesome. But let's not stop there. Besides having access to an even better and more flexible (and very unusual) data structure than a decision tree, my team and I are exploring the concept of disentangled data representations. This is in contrast to distributed representations, where everything is mixed together, which is characteristic of most Deep Learning solutions.
As a result, we can incorporate local, incremental changes and eliminate the problems of catastrophic forgetting and model degradation. We can easily transfer knowledge between separate brains — as hypotheses to evaluate, of course, not as a source of ultimate truth. The rule of questioning everything also applies to machines.
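A tiny illustration of why disentanglement helps — this is a generic sketch of the principle, not the actual AGICortex design. When each concept lives in its own named slot rather than being smeared across shared weights, updating one concept cannot degrade another, and a single concept can be handed to another "brain" as a hypothesis:

```python
class DisentangledMemory:
    """Knowledge as separate, named representations instead of one
    mixed-up weight vector (illustrative toy, not a real system)."""
    def __init__(self):
        self.concepts = {}   # name -> prototype vector

    def learn(self, name, vector):
        """Local, incremental update of a single concept: average the
        old prototype with the new observation."""
        if name not in self.concepts:
            self.concepts[name] = list(vector)
        else:
            old = self.concepts[name]
            self.concepts[name] = [(o + v) / 2 for o, v in zip(old, vector)]

    def transfer_to(self, other, name):
        """Share one concept with another brain, as a hypothesis."""
        other.concepts[name] = list(self.concepts[name])

brain_a, brain_b = DisentangledMemory(), DisentangledMemory()
brain_a.learn("cat", [1.0, 0.0])
brain_a.learn("dog", [0.0, 1.0])
brain_a.learn("cat", [0.8, 0.2])        # refine "cat"...
print(brain_a.concepts["dog"])          # ..."dog" is untouched
brain_a.transfer_to(brain_b, "cat")     # local knowledge transfer
print(brain_b.concepts)
```

In a distributed representation, the "cat" update would have touched weights shared with "dog" — exactly the mechanism behind catastrophic forgetting.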
We don’t need backpropagation anymore. Just like liquid neural networks, the solution adapts in real time to existing conditions. There is no separate re-training phase: you turn the machine on, and it works and learns at the same time.
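The "no separate training phase" idea has a long history outside deep learning. As a stand-in for our actual algorithm, here is the classic least-mean-squares adaptive unit: it predicts and corrects itself on every sample as the stream arrives, using a purely local update, with inference and learning interleaved rather than split into phases.

```python
def lms_stream(samples, lr=0.1):
    """Online least-mean-squares: act, observe, adapt -- per sample."""
    w, b = 0.0, 0.0
    errors = []
    for x, target in samples:
        pred = w * x + b          # act on current knowledge
        err = target - pred       # observe the outcome
        w += lr * err * x         # adapt immediately, locally
        b += lr * err             # (no backprop through layers)
        errors.append(abs(err))
    return w, b, errors

# Hypothetical stream generated by y = 2x + 1: the unit's error
# shrinks as it runs, while it keeps producing predictions.
stream = [(i * 0.01, 2 * (i * 0.01) + 1) for i in range(200)]
w, b, errors = lms_stream(stream)
print(f"first error {errors[0]:.3f}, last error {errors[-1]:.3f}")
```

This is obviously far simpler than a liquid neural network, but it shows the shape of the regime: there is never a moment when the machine is "in training" rather than working.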
But we are ambitious people, and for the last several years our goal has been to create a digital equivalent of a whole brain. So we design machine learning algorithms that are not only transparent and explainable, but also energy-efficient, automated and self-learning.
We equip our algorithms with the ability to manage their own goals, simulate real events using imagination and even understand emotions (hello, biochemistry!). To induce neural activity without input data, seeing what could be and not only what is.
I would be lying if I said it was easy. It was not, in any part. It was hard to get initial funding for what we do, and it was exhausting to combine neuroscience, biochemistry, machine learning and other disciplines. But the ultimate question is always: what do you aim for?
A 5% improvement, or pushing things to another level?
The path to AGI
In 2021, we finally started to develop from scratch a new Machine Learning framework for machines operating in the physical world — based on our unique 3D Neural Networks forming an equivalent of the digital brain. It is nothing like anything you have seen or used before, in both complexity and capacity.
It has equivalents of neurons, glial cells, neocortex & subcortical components, neuromodulators and a (currently simple) awareness — able to use high-level structures to combine simple low-level elements into more complex concepts, tasks and goals.
We can’t wait to share the results of our work with the world, but to do that we need more time to make it ready for others to use: tested, improved, and documented in an easy-to-understand way.
At AGICortex, we have a realistic path to Artificial General Intelligence and a strategy to realize our goals in the coming years.
If you are afraid that it will take over the world — don’t take movies too seriously. Fear is ignited by a lack of understanding. AGI will be more transparent than any human mind ever was.
Let’s look forward together to a more positive future, where we will explore the things that we could not before.
I am sure there is still so much to learn about the Universe and Artificial Intelligence. But this is what makes this journey so interesting.
We should never stop learning and should always stay open to new perspectives. That way, we will be wrong less often.