The Cornerstone in Reasoning from Commonsense Phenomena

Athanasios Tsitsipas
Published in omi-uulm
May 10, 2021

The absence of Commonsense Knowledge hinders modern Artificial Intelligence (AI) applications!

Photo by Ekaterina Z. on Unsplash

Modern AI, which includes Machine Learning and Deep Learning, has taken over the stage of Information Technology as the medium of intelligence. These methods have made extraordinary progress in natural language processing, search engines, machine translation and speech recognition; smart replies are among the recent applications made possible by the explosion in neural network research. Unfortunately, this "explosion" translates into ever larger training datasets and ever more compute resources.

AI systems can now beat world champions in Go and chess, yet such advancements are best framed as "Narrow AI". These deep neural networks are massive stochastic machines: mathematical functions that can find complex and valuable correlations in vast amounts of data. Many AI projects use reinforcement learning, a technique that iteratively maximises a reward. It works well in narrow problems and games; however, things start to fall apart when the situation becomes complicated and a rational agent has to choose between conflicting objectives.

Sheer brute-force applications such as face and object recognition have made extraordinary leaps, and the same approach is now being applied to predicting breast cancer and building natural language models. However, a blunt viewpoint is that all these deep learning applications are virtually equivalent to their training data. That is not a problem for "plain" computer vision tasks, such as image classification. Still, deep learning falls behind when it comes to "understanding" hidden meanings in a sentence or knowledge that cannot be taught with many examples.

Another AI Winter in History?

Technology progresses at the pace of human ambition. The neural network is a concept from the early 1940s; for almost 25 years, researchers tried to emulate a biological neuron in our brain with a mathematical function that models its functionality. It was not long before Marvin Minsky, with his writings and books (e.g., "Perceptrons" and "The Society of Mind") from the late 1960s onwards, became an opponent and sceptic of neural networks. He pointed out their limits and argued that we were far from achieving human intelligence. His writings were accused of biasing the general public, and the first AI winter eventually followed in the 1970s. That winter was also a result of the many promises and visions about the so-called Strong AI, or human-like AI. Despite the advancements in Machine Learning, and now the buzzword Deep Learning, people avoid making predictions about the future of AI so as not to fall into the pitfall of another AI winter.

Maybe not…

As a modern Marvin Minsky, the AI expert and neuroscientist Gary Marcus openly criticises recent AI advancements and their components in in-depth research papers. His book "Rebooting AI" and his recent articles on arxiv.org about the next decade of AI favour hybrid approaches towards general intelligence and robust AI: a mixture of the old and the new, the classical and the modern.

Innate knowledge of any kind (temporal, relational, causal, physical) in a domain should be distilled into a representation or a cognitive model, making it machine-interpretable and useful for complex reasoning tasks. Learning is a crucial part of AI, but not the whole solution.

Current approaches generalise poorly, demand large amounts of data and are often described as black boxes. Achieving human-like intelligence has always been the target of the field, and it is inseparable from the human ability to make inferences about the everyday world, known as Commonsense Reasoning (CR). Automating CR has been, and remains, a great challenge…

Commonsense Reasoning and recent approaches

Since the beginning of the field, the challenge of formalising the concepts underlying CR and automating them has been recognised as a central problem in AI. Moreover, knowledge about commonsense phenomena in the world (e.g., time, space, physics, situations, people) is generally left implicit and omitted, as it is considered universally known. Examples of Commonsense Knowledge (CK) are plentiful: typically, opening a window lets fresh air come in; sugar is sweet; the heat of a human body does not radically change the ambient temperature of a room; raising the shades at night does not make the room brighter. Many tasks may benefit from CR and CK, such as Natural Language Processing, Visual Interpretation (mostly from videos) and Robotics (for obvious reasons). The sketch below gives a toy illustration of what such knowledge looks like when written down explicitly.
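As a toy illustration (not taken from any particular knowledge base), the everyday statements above can be written down as simple relation triples. The relation names here are hypothetical, loosely inspired by ConceptNet-style relations:

```python
# A toy, hand-written commonsense fact store: (subject, relation, object) triples.
# Relation names are illustrative only, not from any specific knowledge base.
COMMONSENSE_FACTS = {
    ("opening a window", "Causes", "fresh air comes in"),
    ("sugar", "HasProperty", "sweet"),
    ("human body heat", "NotCapableOf", "radically changing room temperature"),
    ("raising the shades at night", "NotCauses", "room gets brighter"),
}

def facts_about(subject: str):
    """Return every stored fact whose subject matches."""
    return [fact for fact in COMMONSENSE_FACTS if fact[0] == subject]

if __name__ == "__main__":
    print(facts_about("sugar"))  # [('sugar', 'HasProperty', 'sweet')]
```

The point is not the format but the explicitness: each statement that humans leave unsaid has to be spelled out before a machine can use it.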

Acquiring such knowledge in machine-readable form has seen significant effort in AI since its early days, resulting in a rich palette of construction and curation approaches for large knowledge bases. A few examples are CYC, ATOMIC, FrameNet, WordNet and Visual Genome. A recent report lists the dimensions of Commonsense Knowledge found in these knowledge bases.

Techniques ranging from hand-crafted efforts to Web Mining and Crowdsourcing have been employed to acquire such knowledge. Major knowledge bases (e.g., NELL, KnowItAll, ConceptNet and many others) include interesting facts that record categories, entities and fundamental relations. The resulting taxonomies (also known as "ontologies" in the Semantic Web) are challenging to collect, and it is hard to encode and maintain the knowledge systematically. Representational frameworks that support the encoding of such knowledge mostly employ a form of logic (e.g., Description Logic) that naturally expresses the relations between entities, a formalism widely adopted and applied in the Semantic Web.
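To give a flavour of what such logic-style reasoning buys you, here is a minimal sketch of forward chaining over a tiny hand-written taxonomy. Real systems use far richer formalisms (e.g., Description Logic reasoners); the facts and the single transitivity rule below are purely illustrative:

```python
# Minimal forward chaining over (subject, relation, object) facts,
# using one illustrative rule: "IsA" is transitive.
FACTS = {
    ("window", "IsA", "opening"),
    ("opening", "IsA", "building element"),
    ("sugar", "HasProperty", "sweet"),
}

def forward_chain(facts):
    """Repeatedly apply the transitivity rule until no new facts appear."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(inferred):
            for (c, r2, d) in list(inferred):
                if r1 == r2 == "IsA" and b == c and (a, "IsA", d) not in inferred:
                    inferred.add((a, "IsA", d))
                    changed = True
    return inferred

if __name__ == "__main__":
    # The derived fact was never stated explicitly.
    print(("window", "IsA", "building element") in forward_chain(FACTS))  # True
```

Even this toy example shows the appeal of the approach: conclusions follow from explicit knowledge and explicit rules, so they can be inspected and explained.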

An extensive corpus of knowledge bases and datasets targets Natural Language Processing (NLP) and Computer Vision (CV), which benefit substantially from them. The most studied route towards the goal of reasoning has been the use of logic: CR is mainly implemented as sound (or approximate) inference in the employed logic. Nowadays, with the pervasiveness of Deep Learning and its focus on NLP and CV, the Transformer architecture gave birth to the two most famous models built upon it, namely OpenAI's GPT (1 to 3) and BERT. Even with giant pre-training runs, these models struggle when evaluated against large natural language understanding benchmarks such as SWAG, CODAH, Event2Mind, DROP and many others. On critical examination, the models do not learn anything: they merely memorise and approximate world knowledge, with no actual evidence of deep understanding.
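One common way to peek at what such models have absorbed is masked-token prediction. The following is a minimal sketch, assuming the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint are installed; it only illustrates the probing idea and is not how the benchmarks above are actually evaluated:

```python
# Probe a pretrained masked language model with a commonsense prompt.
# Requires: pip install transformers (downloads the model on first run).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# High-probability completions reflect co-occurrence statistics from the
# training corpus, not grounded understanding of sweetness.
for prediction in fill_mask("Sugar tastes [MASK].")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```

A plausible top answer here says little about whether the model "knows" the fact or has simply seen the phrase often enough, which is precisely the criticism above.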

Testing for CR

To test for commonsense, one could consider passing a Turing test, dating back to the 1950s, which asks whether the entity behind a given intelligent interface is a machine or a human. In 2012, Levesque et al. developed the so-called "Winograd Schema Challenge" (named after Terry Winograd, who introduced the first such example in 1972) as an alternative to the Turing Test. A comprehensive review by Kocijan et al. (2020) covers the available datasets and the broad range of approaches to tackling the challenge, with performance metrics on the accuracy of the models.
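To make the task concrete, here is the classic schema from Winograd (1972), written as a small, hypothetical data structure; actual datasets such as those surveyed by Kocijan et al. use their own formats:

```python
# The classic Winograd schema, laid out as an illustrative dictionary.
schema = {
    "sentence": "The city councilmen refused the demonstrators a permit "
                "because they feared violence.",
    "pronoun": "they",
    "candidates": ["the city councilmen", "the demonstrators"],
    "answer": "the city councilmen",
    # Swapping one special word flips the correct referent, which is what
    # makes the schema resistant to shallow statistical cues:
    "alternate_word": "advocated",   # "... because they advocated violence."
    "alternate_answer": "the demonstrators",
}

def is_correct(predicted: str) -> bool:
    """Score a system's prediction for this single schema instance."""
    return predicted == schema["answer"]
```

Resolving the pronoun requires knowing how councilmen and demonstrators typically behave, which is exactly the kind of unstated commonsense knowledge discussed above.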

Conclusions

“As long as the dominant approach is focused on narrow AI and bigger and bigger sets of data, the field may be stuck playing whack-a-mole indefinitely, finding short-term data patches for particular problems without ever really addressing the underlying flaws that make these problems so common,” Marcus and Davis write in Rebooting AI.

For the moment, most AI researchers throw larger datasets and more compute resources at the problem, hoping that AI will eventually cover every possible case. This is not a personal viewpoint; it is a trend that has gone mainstream. Stuart Russell and Peter Norvig, authors of the book "Artificial Intelligence: A Modern Approach", write in the Preface of its fourth edition: "…We focus more on machine learning rather than hand-crafted knowledge engineering due to the increased availability of data, computing resources and new algorithms…". The shift has been evident over the last decade, and experts ought to acknowledge the field and the work that has been done.

Marcus and Davis propose "a hybrid, knowledge-driven, reasoning-based approach, centred around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible". Van Harmelen et al. (2008), in "The Handbook of Knowledge Representation", urge the need to develop powerful reasoning techniques that can deal with knowledge that is complex, uncertain and incomplete, and that can freely work both top-down and bottom-up.

Nevertheless, the question remains: are we there yet? The future will surprise us.

Research

At our institute, we are interested in techniques for bridging the gap between sensing and reasoning while considering the commonsense phenomena that characterise dynamic application domains. We aim to tackle the uncertainty and incompleteness inherent in data streams by automating commonsense reasoning in sensing environments. The work was partially funded by Germany's Federal Ministry of Education and Research (BMBF) under HorME.
