Deep Knowledge and Deep Learning

Deep learning is nowadays the buzzword: during my first postdoc, I found out how deep learning had stolen the scene in a matter of years since I last worked directly with machine learning; I had left for a while to work with white-box models in mathematical physiology (appetite control). Around the same time deep learning was leaving the underworld, we had spiking neural networks; I came across that model through professor Kasabov.

Deep learning is a family of artificial neural networks. To be straight to the point: it is a huge number of hidden layers in a multilayer perceptron (MLP). "Deep" stands in opposition to the shallow neural networks that were the predominant model until about 2012.
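To make the point concrete, here is a minimal sketch (my own illustration, not any production architecture) of a "deep" MLP forward pass: structurally, the only difference from a shallow perceptron is the stack of hidden layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A toy "deep" MLP: input layer, three hidden layers, output layer.
layer_sizes = [4, 16, 16, 16, 2]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Pass the input through each hidden layer with a nonlinearity...
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    # ...and finish with a linear output layer.
    return x @ weights[-1] + biases[-1]

y = forward(rng.normal(size=(5, 4)))  # a batch of 5 four-dimensional inputs
print(y.shape)
```

Adding more entries to `layer_sizes` makes the network "deeper" without changing a single line of the forward pass, which is exactly why depth alone is a quantitative, not qualitative, change.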

What makes these techniques (e.g., SNNs) so different from what we already have, and what may set them apart in future applications?

Spatio-temporal representations: the world is a stochastic partial differential equation, coded in your brain

Photo by Dmitry Ratushny on Unsplash

One of the biggest challenges of neural networks, and of AI in general, is spatio-temporal computational thinking. Just Google image segmentation being deceived by photos; you will find examples from Microsoft to Google itself. Simple image/video segmentation models are unable to see depth, the difference between a photo and a video.

A problem is spatio-temporal when it cannot be explained properly by a spatial approximation alone; time is essential. And, unfortunately or fortunately, most real-world problems are spatio-temporal: it is the difference between a car coming at you at high speed and a car moving away from you at low speed.

Our brain is spatio-temporal: your eyes do not stop moving as you see; if you could stop the eyeball, "your brain would go blind", since it needs 3D spatio-temporal information. You see in a spatio-temporal manner. So why should we require neural networks not to use spatio-temporal information?

One solution I saw during my first postdoc was using infrared sensors in game-applied image recognition.

Nonetheless, this poses an additional challenge: even if we could gather such spatio-temporal information, can we process it as the brain does? It seems not, and SNNs could be a way to handle it smartly.
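The basic computational unit behind most SNNs can be illustrated with a leaky integrate-and-fire (LIF) neuron. The sketch below is my own simplified illustration (not professor Kasabov's specific model, and all parameter values are arbitrary): the membrane potential leaks toward rest, integrates input current, and emits a spike, then resets, when it crosses a threshold. Time is built into the model from the start.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation.
# Parameters (dt, tau, thresholds) are illustrative choices only.
def lif_spikes(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for i in current:
        v += (dt / tau) * (v_rest - v + i)  # leaky integration of the input
        if v >= v_thresh:
            spikes.append(1)  # spike!
            v = v_reset       # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold input produces a regular spike train; the
# timing of the spikes, not their shape, is what carries information.
train = lif_spikes([1.5] * 50)
print(sum(train), "spikes in 50 steps")
```

Note how the output is inherently a time series: the same neuron fed the same total input at different moments produces a different spike pattern, which is precisely the spatio-temporal sensitivity the text argues for.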

One common reflection is that what a bat can do with its sonar could make any high-tech system look bad!

“Bats navigate and find insect prey using echolocation.” Source

Temporal patterns

Photo by Jan Huber on Unsplash

What caught my attention the most when studying spiking neural networks is that our brain is quite simple in its general behavior (leaving aside the details before firing, or any possible triggers), as are the neurons, just "firing machines": it creates a set of temporal series of spikes. And, amazingly enough, these carry a considerable amount of information. Just think about computers: they are just 0s and 1s, so why should the brain not be able to code information in a simple way? Remember, we evolved over millions of years, and efficiency is a concern in any evolutionary process. Of course, we are not optimized, but the brain evolved to "see the world" in the best possible way for survival.
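How can a plain train of 0s and 1s carry graded information? One of the simplest (and still debated) answers is rate coding: the stimulus intensity is mapped to the probability of firing at each time step, and a downstream reader recovers it by counting spikes. The names and numbers below are my own illustration, not a claim about any specific biological code.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Encode a normalized stimulus intensity (0..1) as a Bernoulli spike train:
# the stronger the stimulus, the more spikes per time window.
def encode(intensity, steps=1000):
    return [1 if random.random() < intensity else 0 for _ in range(steps)]

# Decode by counting: the observed firing rate estimates the intensity.
def decode(spike_train):
    return sum(spike_train) / len(spike_train)

weak, strong = encode(0.2), encode(0.8)
print(decode(weak), decode(strong))  # rates approximate the intensities
```

The point of the sketch is exactly the one in the text: each individual event is binary and trivially simple, yet the statistics of the train carry an analog quantity, much as 0s and 1s carry everything a computer does.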

We may encode the brain: speaking the brain's language

A computer language is just a way to translate to machines what humans can understand: computers are just 0s and 1s, believe it or not! All the miracles in The Matrix are 0s and 1s. Neo is just a set of 0s and 1s. Remember the scene where he saves Trinity: everything becomes 0s and 1s as the scene happens; this is the underworld of computers!

Photo by Maksim Istomin on Unsplash

Coding is hard: we may be able to speak computer language in the future, as Neo did!

Recently I started an online discussion in a Java programming group. I said: coding is hard!

On my best days!😂🤣

Even though people were initially shy and disagreed with me, the next ones gathered the courage to accept my assertion. I have seen this pattern in online discussions in several places; just be patient!

Why do I think so?

Consider writing a Word document. It is quite straightforward, if you know how to write and have enough practice.

What about computers?

Even after years of coding, it is still hard for me! And I say it with no fear of sounding stupid inside the programming community.

Languages come and go, from Fortran to Python, from Java to MATLAB; they all have strong and weak points. Now they seem caught in a paradox: "specificity vs. generality".

Brain-Inspired Artificial Intelligence, based on SNN, may be a way to communicate with the machines directly.

How deep is deep learning?

“And so after reading about how the old idea of artificial neural networks, recently adopted by a branch of Google called Google Brain and now enhanced by “deep learning,” has resulted in a new kind of software that has allegedly revolutionized machine translation, I decided I had to check out the latest incarnation of Google Translate. Was it a game changer, as Deep Blue and AlphaGo were for the venerable games of chess and Go?” Shallowness of Google Translate

Kasparov cries his tears out years after his legendary defeat against Deep Blue: the last stand of humans!

Deep learning itself is not as deep as the name may suggest; see, e.g., The Shallowness of Google Translate. This may lead to misleading generalizations, and to expectations that machine learning is finally human-like.

Photo by Andy Kelly on Unsplash

Two negative traits from deep learning may indicate misuse of the term:

  • Their sense of understanding is limited; The Shallowness of Google Translate presents a couple of examples. I myself shall present some in my upcoming book, along the lines of the examples given in the aforementioned article;
  • They are just hidden layers, packed properly and optimized for learning. "Deep" should be understood as deep in the sense of "feedback loops"; it is not clear we have those loops in artificial deep learning as it is today. "Deep" could also be understood as "understanding".

How deep are biological networks?

“We have clusters of neurons that connect to each other in a self-organizing way, and they make a deep trajectory. How deep? Well, if you take one neuron, it will be millions of neurons deep. But if you take generations of clusters, it will be hundreds of clusters deep… we do not have to specify how deep, it just happens”

Biological deep neural networks

How deep is deep knowledge?

Deep knowledge can be seen as a way to solve some of today's problems with AI understanding. Some call it the "broken leg" phenomenon, a term used by Kahneman and colleagues: if you break a leg, you will not go to the cinema. We humans know why people with a broken leg will not go to the cinema, whereas this is hard for AI models, even though they may predict properly.

“In order to understand the principles of deep learning and deep knowledge, SNN and BI-AI, to properly apply them to solve problems, one needs to know some basic science principles established in the past, such as epistemology by Aristotle, perceptron by Rosenblatt, multilayer perceptron by Rumelhart, Amari, Werbos and others, self-organising maps by Kohonen, fuzzy logic by Zadeh, quantum principles by Einstein and Rutherford, von Neumann computing and Atanassoff ABC machine and of course the human brain.” Nikola Kasabov on brain-inspired AI, deep knowledge and deep learning, (preface)

“Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can.[1][2] It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI,[3][4][5] full AI,[6] or general intelligent action,[7] although some academic sources reserve the term “strong AI” for computer programs that experience sentience or consciousness.[a]” Wikipedia

Final remarks

Some people dare to make predictions about AI; I never would! The boundaries are being challenged quite often and, for better or for worse, machines may represent the future. Jobs will be lost, but jobs will be created. When defeated, Kasparov asked: what if we work together? This may be the way of Brain-Inspired Artificial Intelligence: "symbiotic and collaborative work", as Nikola Kasabov puts it nicely.

Photo by Possessed Photography on Unsplash

Feedback from readers

Acknowledgement: I am in great debt to prof. Kasabov for our online discussion, which inspired this story!😎😍

See his book on the official publisher website

Our online discussion with prof. Kasabov!😎😍😁

After publication notes
