Future AI Will Be the Closest We Can Get to a “Thinking” System

Ayush Jain
Published in CodeX
5 min read · Aug 20, 2022

Current AI systems, built on deep learning models, have fallen short on many fronts. What should the road ahead look like?

I have been rather pessimistic in my previous posts about the state of current AI systems. The results of several studies on data-driven AI systems have been grim. While these systems can show remarkable accuracy on prompts similar to their training data, their responses to prompts that differ sharply from that data have been wildly unreliable.

To take just one instance, in a lecture at MIT, Prof. David Cox gave an example from Wang et al. (2018), where the researchers placed objects like a guitar or a bicycle over a monkey and found that the image recognition software recognised the monkey as a person. As Prof. Cox notes, the reason was that the training data didn’t include a monkey with a guitar.

Image from Alexander Amini (YouTube)

Data-driven AI crunches a large amount of data to settle on its final prediction. In many cases, it can recognise an object or a piece of text with high accuracy. In many other cases, however, the prediction goes way off. The solution cannot be to keep feeding more data into the AI until it becomes predictable: the space of possible inputs is potentially infinite.

It seems like we have reached the end of the road. In my previous articles, I even tried to pit the statistical basis of AI against human intelligence and concluded that AI is far from human-level intelligence. Perhaps this isn’t the right way to go about it.

AI indeed mimics human systems. But this fact should not lead us to believe that a true AI should have human intelligence as its horizon. In discussing the future of AI and the possibility of AI overtaking humans, the author Gary Marcus says, “Humans are a very low bar to surpass.” In calling it a low bar, Marcus points, for instance, to how fragile our memory is. The storage capacity and computational capabilities of machine systems far exceed ours. I feel traditional philosophical methods have spoilt us into thinking that there is more to us than we can know (for example, when Kant says there are transcendental categories unique to human thought and experience). The task, therefore, is to build systems that can aid humans, not surpass them.

Before deep learning models were developed, symbolic AI systems were in use. Instead of training on billions of images, these systems had labelled objects and logical relations encoded in a database. So a symbolic AI system can easily handle a sentence like ‘human sitting on a table’, which is very difficult for a program like DALL-E 2 to process. The system scans its knowledge base, detects the two symbols ‘human’ and ‘table’ and the relation ‘on’, and extracts an image corresponding to them. Labelling the images reduces a potentially infinite dataset to a finite set of elements.
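To make that concrete, here is a minimal sketch of the lookup in Python. The tiny knowledge base, the symbol lists and the retrieve function are all invented for illustration; a real symbolic system would encode far richer logical relations.

```python
# A toy symbolic retrieval step: entries, symbols and relations below
# are invented for illustration only.
knowledge_base = {
    # (subject, relation, object) -> stored image reference
    ("human", "on", "table"): "img/human_on_table.png",
    ("human", "on", "chair"): "img/human_on_chair.png",
    ("cat", "under", "table"): "img/cat_under_table.png",
}

def retrieve(prompt: str) -> str:
    """Scan the prompt for known symbols and a relation, then look up
    the matching entry in the knowledge base."""
    words = prompt.lower().split()
    subjects = [w for w in words if w in {"human", "cat"}]
    relations = [w for w in words if w in {"on", "under"}]
    objects = [w for w in words if w in {"table", "chair"}]
    if subjects and relations and objects:
        key = (subjects[0], relations[0], objects[0])
        return knowledge_base.get(key, "no matching image")
    return "prompt contains symbols outside the knowledge base"

print(retrieve("human sitting on a table"))  # img/human_on_table.png
```

Notice that the system never guesses: either the symbols and relation are in its knowledge base, or it fails outright. That brittleness is exactly the limitation discussed next.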

But symbolic AI cannot stand alone and perform tasks adequately. It fails when faced with a prompt outside its knowledge base. These limitations are why the neural network model was employed: so that the AI could learn and adapt. A neural network decomposes an image (or symbol) into many features, embedded across the different layers of the network. To predict an output, the network statistically measures how closely the input matches the training data and then gives its result. But, as we have seen, when a small amount of noise is added to the input image, the system fails to predict the required outcome accurately. For example, when we add noise to the image of a panda, the system identifies it as a “gibbon”:

Image from Knowable Magazine
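The panda-to-gibbon example traces back to Goodfellow et al. (2015), who generated the noise with the fast gradient sign method: nudge every pixel slightly in the direction that increases the model’s loss. Here is a rough sketch in PyTorch, with an untrained toy model and a random image standing in for the real classifier and photo.

```python
import torch
import torch.nn as nn

# Fast gradient sign method (FGSM): push every pixel slightly in the
# direction that increases the model's loss. The untrained toy model and
# random "image" below are stand-ins for a real classifier and photo.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input
label = torch.tensor([0])                             # its "true" class

loss = loss_fn(model(image), label)
loss.backward()  # gradients now tell us how each pixel affects the loss

epsilon = 0.05  # perturbation budget: small enough to look like noise
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# With a trained model, the second prediction typically flips even though
# the two images look identical to a human eye.
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The unsettling part is how cheap the attack is: one gradient computation and an imperceptible nudge are enough to derail a purely statistical predictor.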

The Road Ahead

In response to the current problem, Gary Marcus has been pushing for a hybrid model: a neuro-symbolic AI. Various routes have been proposed for what such a hybrid model could look like. Here are some outlined by Marcus in his article in Noema magazine:

People have considered many different ways of combining symbols and neural networks, focusing on techniques such as extracting symbolic rules from neural networks, translating symbolic rules directly into neural networks, constructing intermediate systems that might allow for the transfer of information between neural networks and symbolic systems, and restructuring neural networks themselves.
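To give a flavour of the first technique on that list, extracting symbolic rules from a neural network, here is a sketch of one common approach: distil the network into a decision tree by fitting the tree to the network’s own predictions, then read the tree off as if/else rules. The dataset and model sizes are arbitrary choices of mine, not anything from Marcus’s article.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Rule extraction by distillation: train a small network, then fit a
# decision tree to the network's predictions and read the tree as rules.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, net.predict(X))  # imitate the network, not the raw labels

# Human-readable if/else rules approximating the network's behaviour
print(export_text(tree, feature_names=["x1", "x2"]))
```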

The hybrid model puts equal emphasis on data and representation. It relies on statistical modelling of data as well as on the logical, abstract relations between things. These hybrid models offer a road to a general AI system that can learn how to reason: one that generalises information and produces knowledge across domains.
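What might “equal emphasis on data and representation” look like in code? Here is one deliberately toy sketch, with every score, attribute and rule invented: a statistical classifier proposes labels, and symbolic background knowledge vetoes those that contradict what else we know about the scene (recall the monkey with the guitar).

```python
# A made-up hybrid step: symbolic background knowledge corrects a purely
# statistical prediction. All scores, attributes and rules are invented.
neural_scores = {"person": 0.62, "monkey": 0.35}  # classifier confidences

# Symbolic knowledge: attributes that must hold for a label to be valid.
required_attributes = {
    "person": {"wears_clothes"},
    "monkey": {"has_fur", "has_tail"},
}
observed_attributes = {"has_fur", "has_tail"}  # from separate detectors

def hybrid_label(scores, observed):
    """Return the highest-scoring label whose symbolic constraints hold."""
    for label, _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if required_attributes.get(label, set()) <= observed:
            return label
    return "unknown"

# The network leans towards "person" (the monkey is holding a guitar),
# but the symbolic constraints overrule it.
print(hybrid_label(neural_scores, observed_attributes))  # monkey
```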

The hybrid system is close to how we use human language. When we see an object like an apple, we don’t just register the light reflected from the apple to our eyes; we see the apple as an apple. Our conceptual capacities play an important role in perception. However, as Gary Marcus also notes in his article, we don’t know whether this conceptual ability is innate in us or acquired along the developmental process. If it is innate, what causes it? And if it is acquired, can machines also learn it? It isn’t easy, then, to see how to design these systems so that the hybrid models give their best output.

Current trends in neuro-symbolic AI research classify it in multiple ways. In 2020, Henry Kautz presented “five ways to bring together the neural and symbolic traditions”: [symbolic Neuro symbolic], [Symbolic[Neuro]], [Neuro ∪ compile(Symbolic)], [Neuro → Symbolic], and [Neuro[Symbolic]]. There is no reason to be intimidated by this classification. Kautz is simply enumerating the logical ways the two systems can be composed when assembled into one.
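As a rough illustration of two of those patterns, with every function below a stub I made up: [Neuro → Symbolic] chains a network’s output into a reasoner, while [Symbolic[Neuro]] embeds a neural call inside a symbolic procedure.

```python
# Two of Kautz's patterns sketched as composition styles. Every function
# here is a made-up stub; real components would be a trained network and
# a logic engine.

def neural_perceive(image):
    """Stand-in for a neural network mapping pixels to symbols."""
    return {"human", "table"}

def symbolic_reason(symbols):
    """Stand-in for a logic engine over extracted symbols."""
    return "on(human, table)" if {"human", "table"} <= symbols else "unknown"

# [Neuro -> Symbolic]: a pipeline; the network's output feeds the reasoner.
print(symbolic_reason(neural_perceive("photo.jpg")))

# [Symbolic[Neuro]]: a symbolic procedure that calls the network inside it.
def symbolic_prove(goal, image):
    symbols = neural_perceive(image)  # neural subroutine inside the solver
    return symbolic_reason(symbols) == goal

print(symbolic_prove("on(human, table)", "photo.jpg"))  # True
```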

So this brings us to a fascinating question. We can design multiple neuro-symbolic AI systems, each with a particular modality. Are we looking ahead to a time when each AI has a subjective view of things, based on the symbolic extraction of the neural network in its system (similar to how humans interpret things based on their individual experiences)?
