Approachable Programming Part II

Cognitive Applications: Building on Declarative Programming

The Declarative Programming Model: Data At The Center, World Model, Machine Learning & Decision Engine Working Together.

In the previous paper, Approachable Programming, I tried to give a glimpse of the Future of Computer Architecture and Programming. The Future will belong to Declarative Programming Models, with software and hardware working together on domain-specific solutions.

In this paper, I will discuss models that reach beyond ML to make programming even more approachable.

As important as Machine Learning is, it is just one piece of a broader next-generation framework. Part of what makes Machine Learning so powerful is that it is declarative: give it a problem and it will figure out how to solve that problem. That said, Machine Learning solves pattern recognition problems, and next-generation applications require more than pattern recognition. What else do we need?

Cognitive Applications will become central in the years ahead: applications that Perceive, Understand, and Decide. The Perception will be driven by Machine Learning. The Understanding will be driven by a Cognitive Database, and the Decisions will be driven by a Declarative Decision Engine. At the center of this new breed of applications will be the emergence of a clear World Model (Ontology) — a map of the world around us. In many ways this World Model — instantiated as a schema — will take us a step closer to representing the world and the relationships between elements in the world as our brains do.

One way to understand Cognitive Applications is to look at trends already present in the Machine Learning world. One of the most important is combining Non-Symbolic Artificial Intelligence (Machine Learning) with Symbolic AI (a Data Driven Decision Engine) through a shared World Model.

In DeepMind’s papers explaining AlphaGo Zero, the authors show how ML is moved forward using different types of declarative programming methods. Machine Learning is numerical and fundamentally Non-Symbolic. This particularly manifests itself when we try to understand how ML systems arrive at their recommendations: because the system is based not on a symbol-driven world model but on a geometric mapping, we have very limited ability to understand its internal decision processes. Leaving out the symbols makes the steps along the way hard to deduce.

For example, we may use a Non-Symbolic AI system (Computer Vision) to take an image of a chess piece and generate a symbolic representation telling us what the piece is and where it sits on the board, or to understand the attributes of the current board state. This information can then be stored symbolically in the knowledge base and used to make decisions for the AI chess player, much as DeepMind’s AlphaGo Zero does.
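
This perception-to-symbols hand-off can be sketched in a few lines. The classifier and knowledge-base names below are illustrative stand-ins, not any real system's API; the point is the boundary where pixels become facts.

```python
# Sketch: a Non-Symbolic perception step handing off to a Symbolic knowledge base.
# perceive_piece is a stand-in for a real computer-vision model.

def perceive_piece(image):
    """Hypothetical ML classifier: maps raw pixels to a symbolic label."""
    # A real system would run a neural network; here we fake its output.
    return {"piece": "knight", "color": "white", "square": "g1"}

class KnowledgeBase:
    """Minimal symbolic store: facts are (subject, predicate, object) triples."""
    def __init__(self):
        self.facts = set()

    def assert_fact(self, subject, predicate, obj):
        self.facts.add((subject, predicate, obj))

    def query(self, predicate):
        return [(s, o) for (s, p, o) in self.facts if p == predicate]

kb = KnowledgeBase()
percept = perceive_piece(image=None)          # non-symbolic step (stubbed)
kb.assert_fact(percept["piece"], "located_at", percept["square"])
kb.assert_fact(percept["piece"], "has_color", percept["color"])

print(kb.query("located_at"))                 # symbolic facts a decision engine can use
```

Everything downstream of `assert_fact` is symbolic: the decision engine never sees pixels, only facts.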

AlphaGo Zero uses Symbolic AI but, for the most part, generates Non-Symbolic representations.

Another way to think about systems like AlphaGo is in terms of their similarity to human reasoning. In humans, the visual perception system (the visual cortex) operates in a non-symbolic fashion to recognize objects; as a result we have little visibility into, or control over, that system. In parallel, the “rational” part of our brain operates entirely symbolically, reasoning and solving problems based on what the perceptual part provides. In the Cognitive Application world the problem solver is the Data Driven Decision Engine, and the perceptual and decision-making parts are tied together by the Cognitive Database.

Limitations of Machine Learning and Emergence of new Data Driven Declarative Programming Models

Machine Learning (ML) will start an explosion of product creation in software, just as the smartphone supply chain did for hardware.

ML = magic pattern/voice/image recognition as cheap commodity to build with

Mobile = magic wireless/sensor/CPU chips as cheap commodities to build with

Fundamentally, ML maps data geometrically to find patterns using deep learning, reinforcement learning and simple regression algorithms.

In simple terms, ML algorithms do vector/geometric matching between two data sets: input and output.
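
A minimal sketch of this geometric view, using a toy 1-nearest-neighbor model (one of the simplest instances of matching by distance in vector space; real deep learning models learn far richer mappings):

```python
# "Geometric matching": map inputs to outputs purely by distance in vector space.
import math

def distance(a, b):
    """Euclidean distance between two input vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit(inputs, outputs):
    """'Training' here is just memorizing the input/output pairs."""
    return list(zip(inputs, outputs))

def predict(model, query):
    """Prediction returns the output of the geometrically closest training input."""
    _, output = min(model, key=lambda pair: distance(pair[0], query))
    return output

model = fit([(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)], ["low", "low", "high"])
print(predict(model, (4.5, 4.8)))   # nearest training point is (5.0, 5.0) -> "high"
```

Notice that the model has no notion of what "low" or "high" mean; it only measures geometric proximity to examples it has seen, which previews the generalization limits discussed below.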

Deep learning is pattern recognition. Models memorize a data manifold and perform local generalization around training examples.

In supervised learning, humans feed in labeled data. Supervised learning requires significant investment and is time-intensive, and it has strongly centralized, top-down aspects. Once trained, those algorithms can be embedded in tiny, commoditized chipsets to run and deliver services.

Reinforcement learning is done by trial and error. Programmers hardcode the rules of the environment, which represent a mental model/World Model defined by a Schema describing the relationships between entities. DeepMind’s AlphaGo Zero is a perfect example of reinforcement learning, and some self-driving cars (e.g. Waymo) use this approach too. With this approach the machine learns on its own how to win the game. This is not a chaotic environment; it is driven by a simple set of rules defining how the game operates. By dying in a game a million times, for example, the machine learns different strategies and tactics for playing it. DeepMind’s research on neural networks that play Go produced an interesting paper on reinforcement learning as an alternative to supervised learning, letting the machine learn winning play by trial and error.
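
Trial-and-error learning can be shown at toy scale with tabular Q-learning, a classic reinforcement learning algorithm (far simpler than AlphaGo Zero's neural-network approach; all names and parameters below are illustrative):

```python
# Tabular Q-learning on a tiny corridor world: states 0..4, goal at state 4.
import random

random.seed(0)
n_states, actions = 5, [-1, +1]          # move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):               # restart ("die") many times
    s = 0
    while s != n_states - 1:
        # Explore occasionally, otherwise act greedily on current estimates.
        a = random.choice(actions) if random.random() < epsilon else \
            max(actions, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # Update the value estimate from experience, not hand-coded strategy.
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
        s = s2

# After training, the greedy policy moves right toward the goal from every state.
policy = [max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)]
print(policy)
```

The rules of the corridor (how movement works, where the reward is) are hardcoded; the strategy for reaching the goal is learned entirely from repeated trials.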

For all its glory and usefulness, deep learning has some limitations.

Machine learning lacks reasoning and abstraction. It lacks “mental models”, or what we call a “World Model”. In machine learning the machine only learns according to the dataset, so one needs to cover all the cases and all the different situations; data, and the depth of data, become critical. For example, if one trains a model to drive in the US, training for the UK starts from scratch. It lacks generalization.

In a similar way a deep neural network may learn to recognize tumors in an MRI scan better than a human, but the same neural network will have no concept of a patient, a treatment protocol, family history, or a myriad of other key factors, all of which represent a complete picture of the real world. Similarly, although the neural network will have perceived the tumor, it will not have any process for sorting through the many treatment alternatives, the financial situation, the availability of staff and equipment, religious biases, and all the other factors that go into the decision process that starts as soon as the perception is complete. It is the ability to start with Perception, then move on to building a World Model, and then sort through the Decision process that represents the complete Cognitive Application.

We humans use reasoning, abstraction, and planning to drive “what if” questions that simply don’t exist in any deep learning model.

For example let’s assume the task is avoiding getting hit by a car.

Deep learning:

- learn a point-by-point mapping between sensorimotor space and vital outcomes

- die millions of times

- need to re-learn most of it in a new environment


Humans:

- learn from others (imitation, spoken instructions)

- model the world in an abstract way (e.g. understand physics to predict a collision in a new context)

- die 0 times
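
The abstract-model alternative can be made concrete. A minimal sketch, assuming a one-dimensional road and constant velocities (all names and numbers here are illustrative): instead of memorizing sensor/outcome pairs, we encode simple physics and predict a collision in a situation never seen before.

```python
# Predict a collision with a tiny physics model (constant straight-line motion)
# rather than a trained mapping. No training data, no dying, full generalization
# to any positions and speeds the model's assumptions cover.

def will_collide(car_pos, car_vel, walker_pos, walker_vel, horizon=5.0, radius=1.0):
    """Simulate forward in small time steps; collide if the two get within radius."""
    t = 0.0
    while t <= horizon:
        car_x = car_pos + car_vel * t
        walker_x = walker_pos + walker_vel * t
        if abs(car_x - walker_x) < radius:
            return True
        t += 0.1
    return False

print(will_collide(car_pos=0.0, car_vel=10.0, walker_pos=30.0, walker_vel=0.0))  # True
print(will_collide(car_pos=0.0, car_vel=1.0,  walker_pos=30.0, walker_vel=0.0))  # False
```

Because the knowledge lives in the model of motion rather than in examples, a new context (different speeds, different distances) needs no retraining.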

Adding more layers to machine learning algorithms will not get us closer to dealing with real world problems in a complete way. All those layers may lead to more accurate Perception, but we need Understanding and Decision Making to get all the way there. Abstraction, reasoning, and planning represent the next frontiers: the key to unlocking the promise that ML points us toward.

A Complete Declarative Platform

In the previous paper, I talked about the powerful promise of Declarative Systems. In the past ten years an entirely new class of applications has been created based on ML as a declarative engine. For example, no matter how many millions of lines of code we wrote, neither face recognition nor translation of human languages worked very well. Today, by tapping into the new declarative ML world, both of these problems are largely solved. Can we apply the same power and promise of Declarative to Understanding and Deciding? That is what the Cognitive Application system does.

Abstraction is about taking the perceptual results of the ML engines and building a model of the world around us. That model includes other cars, patients, treatments, orders, inventory, economic measures, and more. That is what a database does, and we can harness the power of the declarative query processor to bring it all to life.
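
A minimal sketch of a World Model as a database schema, using the SQL available in Python's standard library (the schema and data here are illustrative, not a real Cognitive Database product):

```python
# The World Model instantiated as a schema, queried declaratively with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE findings (patient_id INTEGER, finding TEXT, source TEXT);
""")
conn.execute("INSERT INTO patients VALUES (1, 'Alice')")
# The finding below is the kind of fact a Perception (ML) stage would emit.
conn.execute("INSERT INTO findings VALUES (1, 'tumor', 'mri_model_v1')")

# A declarative query: we state WHAT we want; the engine decides HOW to get it.
rows = conn.execute("""
    SELECT p.name, f.finding
    FROM patients p JOIN findings f ON f.patient_id = p.id
""").fetchall()
print(rows)   # [('Alice', 'tumor')]
```

The schema relates entities (patients, findings) just as the World Model relates elements of the real world, and the query processor supplies the declarative power for free.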

Reasoning and planning are about being able to build Policy Bases. A policy base encapsulates the deep knowledge of Domain Experts in a declarative framework that allows the Cognitive Application to sort through complex situations and make increasingly powerful decisions that lead to meaningful plans.
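
The idea of a policy base can be sketched as rules-as-data plus a generic engine (rule names, facts, and actions below are illustrative, not a real decision-engine API):

```python
# A declarative "policy base": domain knowledge lives in data (condition/action
# rules), and a small generic engine sorts through them to reach a decision.

policy_base = [
    # (description, condition over facts, recommended action)
    ("tumor found and patient stable", lambda f: f["tumor"] and f["stable"], "schedule_surgery"),
    ("tumor found, patient unstable",  lambda f: f["tumor"] and not f["stable"], "stabilize_first"),
    ("no tumor",                       lambda f: not f["tumor"], "routine_followup"),
]

def decide(facts):
    """Return the action of the first policy whose condition matches the facts."""
    for description, condition, action in policy_base:
        if condition(facts):
            return action
    return "escalate_to_human"   # no rule matched

print(decide({"tumor": True, "stable": False}))   # stabilize_first
```

Domain Experts extend the system by adding rules, not by writing new control flow; that separation is what makes the approach declarative.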

Three Declarative Engines (Machine Learning, Cognitive Database, and Decision Engine) move us into a new world of perception, abstraction, and reasoning. For the first time new applications will see, understand, and act. That is the Cognitive Application World.

Putting All the Pieces Together

First, let’s be clear about where we are today. Game-playing and some car-driving systems combine very sophisticated ML with handcrafted symbolic modules that are completely hardcoded and written by hand. Whether a self-driving car uses Lidar or just radar and cameras, that only solves the “perception” problem. The decisions about when to change lanes, which objects to avoid, and when to speed up or slow down are all based on millions of lines of handwritten C or C++ code. No wonder nobody can confidently predict when cars will truly drive themselves in the real world. And that is the opportunity!

Just as ML revolutionized Perception with a declarative engine, the same needs to happen for Understanding and Decision Making. Developing databases specifically designed to live in this world — Cognitive Databases — is part of the answer. This means harnessing the declarative power we are already familiar with in the database world in somewhat new ways. Symbolic AI, in the form of Decision Engines that bring declarative power to sets of rules (policy bases), completes the picture. For the first time, geometric and symbolic come together in the new declarative world.