Introspective Reasoning within the OpenCog Framework

How SingularityNET leverages the OpenCog Framework to bridge the human and the machine mind, and to address the challenges of effective reasoning.

Nil Geisweiller
SingularityNET
Jun 15, 2018


The problem with introspection is that it has no end.

- Philip K. Dick

With inference, comes exponential growth.

Multiple choices must be made to grow an inference, and the number of choices compounds at every step, resulting in exponential growth that must be properly controlled.

Therefore, learning how to reason effectively is an essential piece of the puzzle that is Artificial General Intelligence (AGI). In this post, I will provide a high-level presentation of our work at the OpenCog Foundation where we confront this challenge.

Why Reasoning?

Reasoning can be an effective method for solving all kinds of problems: proving mathematical theorems, performing common-sense reasoning, making predictions, making decisions with insufficient data, creating high-level abstractions, and so on.

Generally speaking, an automated theorem prover is not too different from a program learner.

An automated theorem prover evolves or constructs proofs in some fashion until one fulfills some criterion, such as drawing a chain of inferences from given premises to a desired conclusion.

Similarly, a program learner evolves or constructs programs in some fashion until one fulfills some criterion, such as maximizing a fitness function, for instance by fitting a table of data.

The relationship goes even further due to a well-known isomorphism between proofs and programs, the Curry-Howard correspondence, which essentially states that programs and proofs can be seen as structurally identical.

Therefore, it is not unreasonable to wonder: if reasoning is just another form of computing, and proofs are programs in disguise, why bother with reasoning in the first place? Why not just evolve programs instead of dealing with the intricacies of logic?

The answer may not be simple and is probably debatable. I hold the opinion that reasoning is the pragmatic bridge between the machine and the human mind.

Let me give an example.

Let’s assume that we want to learn a program explaining some data, given, say, a mapping from inputs to outputs

x1 ↦ y1
⋮
xn ↦ yn

with the constraint that such a program must be a linear function:

f(x1 + x2) = f(x1) + f(x2)
f(c × x) = c × f(x)

Now, we have two options:

  1. Use a program learner with a fitness function rewarding a candidate program P that agrees with the data, i.e., P(xi) = yi for all i, and that is a linear function.
  2. Use a prover to construct proofs of theorems expressing that P agrees with the data and is linear.

Both would give us the desired outcomes.

In fact, the program learner might be more efficient: its fitness function could embed a linearity checker (which may or may not look like a form of reasoning), or its search algorithm could be optimized to, for instance, search only the space of linear functions.
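To make the first option concrete, here is a minimal Python sketch of such a fitness function; the names are hypothetical, and linearity is merely spot-checked on sampled points rather than proven:

import random

# Hypothetical fitness function for the program-learner option: it rewards a
# candidate program that reproduces the data and that behaves linearly on
# randomly sampled points. A sampling check can only refute linearity, never
# prove it, which is exactly the kind of guarantee the prover would provide.
def fitness(program, data, trials=100):
    # Agreement with the data: P(xi) == yi for all i.
    if any(program(x) != y for x, y in data):
        return 0.0
    # Spot-check the two linearity laws on random inputs.
    for _ in range(trials):
        x1 = random.uniform(-10, 10)
        x2 = random.uniform(-10, 10)
        c = random.uniform(-10, 10)
        if abs(program(x1 + x2) - (program(x1) + program(x2))) > 1e-6:
            return 0.0
        if abs(program(c * x1) - c * program(x1)) > 1e-6:
            return 0.0
    return 1.0

data = [(1, 3), (2, 6), (4, 12)]
print(fitness(lambda x: 3 * x, data))       # 1.0
print(fitness(lambda x: abs(3 * x), data))  # 0.0 almost surely: fits the data but is not linear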

However, and I think this is crucial, what the prover offers is an explanation: why such a model fits the data, and why it is a linear function. Moreover, the explanation is not buried in the code and its run-time execution; it is laid out explicitly as an inference chain.

Here’s another example. Let’s say you want to calculate 2+3. Again, you have two options:

  1. Write 2+3 in the programming language of your choice.
  2. Axiomatize numbers and addition and run a prover over the query 2+3=?.

In the first option, the program will be turned into machine code, the numbers will be sent to the microprocessor, and the answer will come back: 5.

In the second option, the prover will construct a proof, an inference tree formally representing

2+3 = 1+4 = 0+5 = 5
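A minimal Python sketch of this idea, using a toy rewriting procedure rather than an actual prover, shows how the answer can be computed together with its derivation:

# Toy illustration of the second option: addition is reduced step by step
# using the rewrite (a+1) + b = a + (b+1), and every intermediate expression
# is recorded, so the result comes with an explicit explanation.
def add_with_trace(a, b):
    trace = [f"{a}+{b}"]
    while a > 0:
        a, b = a - 1, b + 1
        trace.append(f"{a}+{b}")
    trace.append(str(b))  # 0 + b reduces to b
    return b, " = ".join(trace)

result, derivation = add_with_trace(2, 3)
print(result)      # 5
print(derivation)  # 2+3 = 1+4 = 0+5 = 5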

The first option is more expedient. However, what is important is that the second option is more transparent: it provides not only the answer but an explanation for it. It seems that the more of the computation takes place on the reasoning side, the more transparent the process is.

Perhaps just for the human and not for the machine.

As the ultimate seeders of machine intelligence, we are in the best position to help, and I would argue that helping includes enabling the machine to introspect itself.

Therefore the increased transparency for the human, in a way, allows the human to pass that transparency on to the machine.

I hope that gives you an idea of why, I believe, reasoning is so fundamental to creating an Artificial General Intelligence — if not in principle, then certainly in practice.

Now, let us look at the work on Inference Control Learning within the OpenCog framework.

OpenCog

First, let me recall some critical components of OpenCog.

The primary component is a graph-based database called the AtomSpace. The AtomSpace is a hyper-meta-graph: hyper because edges can connect more than two vertices, and meta because vertices can themselves be graphs.

In OpenCog lingo, a link is an edge, a node is a vertex, and an atom is either a link or a node, often understood as the entire subgraph rooted at that atom. Links are directed: the afferent ones are called incoming links, and the efferent ones outgoing links.

Atoms are labeled with types, and nodes additionally with names. Atoms can also be assigned values (numbers as well as structured data).

All of this allows us not only to create networks of symbolic knowledge intermixed with non-symbolic knowledge stored as values (such as probabilities for handling uncertainty), but also to layer a second network on top for managing attentional focus.

Here’s an example of an atom (subgraph) expressing the probability of it raining here tomorrow: an AtTime link wrapping an Evaluation link, carrying a truth value of (0.6, 0.5), where 0.6 indicates the probability of the event and 0.5 the confidence over that probability. Words like AtTime and Evaluation denote atom types; “raining” and “here” denote node names.
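To make that structure concrete, here is a minimal Python sketch of typed, nested, valued atoms. It is a toy illustration only, not the real AtomSpace API, and type names such as “Predicate”, “Concept”, and “Time” are merely indicative:

# Toy illustration of AtomSpace-style atoms (not the real OpenCog API).
# Nodes have a type and a name; links have a type and an ordered list of
# outgoing atoms, which may themselves be links (hence "meta") and may have
# any number of targets (hence "hyper"). Any atom can carry values, such as
# a (strength, confidence) truth value.
class Atom:
    def __init__(self, type_, values=None):
        self.type = type_
        self.values = values or {}

class Node(Atom):
    def __init__(self, type_, name, values=None):
        super().__init__(type_, values)
        self.name = name

class Link(Atom):
    def __init__(self, type_, outgoing, values=None):
        super().__init__(type_, values)
        self.outgoing = list(outgoing)

# "It will probably rain here tomorrow":
raining_here = Link("Evaluation",
                    [Node("Predicate", "raining"), Node("Concept", "here")])
raining_tomorrow = Link("AtTime",
                        [raining_here, Node("Time", "tomorrow")],
                        values={"truth": (0.6, 0.5)})  # (probability, confidence)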

For the critically minded, it is worth mentioning that node names are for human comprehension only. OpenCog does not need to assign meaning to them, as the meaning comes from the network itself.

Multiple processes can operate over the same AtomSpace, creating, modifying, or destroying knowledge. Examples of such processes are MOSES, a program learner; the URE, a reasoning engine (explained further below); ECAN, an attention allocation manager; and more.

Unified Rule Engine

The Unified Rule Engine (URE) is the principal tool for reasoning in OpenCog.

It is so called because it does not commit to any particular logic; at heart, it is just a program that, given a rule base (a logic), glues rules together to construct inferences.

Its main rule base is PLN (Probabilistic Logic Networks), but it is also used with other logics, such as R2L (Relex2Logic) for natural language comprehension and a home-brewed logic for discovering frequent patterns within an AtomSpace.

It is important to note that the URE can not only prove theorems but also answer pattern-matching-style queries, connecting the dots and filling in the blanks as needed.

For instance, given a description of what a linear function is, as well as a description of the operators of some programming language, you may submit a query asking for the programs X that are linear, and the URE will start constructing such programs, finding the terms that may substitute X, alongside the proofs that these terms are indeed linear programs.

The URE is no doubt a powerful tool, but like any tool of that nature it suffers from combinatorial explosion: the search space grows exponentially with the proof size. This comes from the multiple choices that must be made to grow an inference.

For instance, if an inference tree has 5 premises and 3 applicable rules, a backward expansion results in 15 choices for just one step. The number of choices compounds at each step, hence the exponential growth. To remain tractable, that growth must be properly controlled, which is what is called the inference control problem.
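A back-of-the-envelope sketch of that growth, assuming, purely for illustration, a constant branching factor at every step:

# If every expansion step offers premises * rules choices, the number of ways
# to grow an inference is exponential in the number of steps.
premises, rules = 5, 3
branching = premises * rules            # 15 choices for a single step
for steps in range(1, 6):
    print(steps, branching ** steps)    # 15, 225, 3375, 50625, 759375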

In its full generality, it is an unsolvable problem.

However, in practice, we are only interested in “real world” problems (real in a broad sense, including the usual mathematics, etc.). Moreover, these problems tend to repeat themselves or display some level of similarity, albeit sometimes a very abstract one. This gives us a chance to incrementally learn from them and apply that accumulated wisdom to new problems.

Addressing the Inference Control Problem

To address the inference control problem, the idea is to feed the URE inference problems and to record its every step as it grows the inferences.

Then we store these steps in an AtomSpace, label them as successful or unsuccessful, and attempt to discover rules that discriminate between them. Eventually, we use these rules as guidance to control the URE on future inference problems.
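In rough pseudo-Python, that loop might look as follows; every name here (ure.solve, step.in_final_proof, miner.discriminate, and so on) is illustrative and does not correspond to the actual OpenCog or URE API:

# Illustrative sketch of inference control learning, assuming a URE-like
# object that exposes its expansion steps and a miner-like object that can
# generalize over them.
def learn_control_rules(problems, ure, miner):
    traces = []
    for problem in problems:
        for step in ure.solve(problem):             # record every expansion step
            traces.append({
                "premise": step.premise,
                "rule": step.rule,
                "successful": step.in_final_proof,  # label: did the choice pay off?
            })
    control_rules = miner.discriminate(traces)      # rules separating good from bad steps
    ure.set_control_rules(control_rules)            # bias future inference growth
    return control_rules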

The interesting part is that these rules are themselves atoms (knowledge living in the AtomSpace), so we can rely on the whole of OpenCog’s intelligence to discover them, including reasoning with the URE itself.

The question is, can we transfer some of the capabilities at solving real-world problems to the meta-problem of improving inference control?

If the answer is yes, it means that as OpenCog gets better at solving problems, it automatically gets better at getting better at solving problems.

Whether or not that is the case is an open question that we hope to begin to answer in the foreseeable future.

So far, what we have is:

  1. The URE can use control rules to guide inference growth.
  2. Embryonic experiments exist with a simple toy problem (reasoning about the transitivity of the alphabetic order).
  3. OpenCog (actually the URE itself) can learn context-free control rules to speed up that toy problem.
  4. We can almost use the pattern miner (itself a URE process) to discover context-sensitive rules to further improve the inference control over that same toy problem.

Further Reading

I hope this post sparked some curiosity on your part.

If you wish to read more on the subject, I suggest this post on inference meta-learning, where I describe in more detail the toy meta-learning inference control experiment and also give some relevant references (which are missing from this post).

And I hope to connect with you again in the not-too-distant future, as further progress is made on learning context-sensitive control rules and, ultimately, on applying that methodology to real-world problems.

How Can You Get Involved?

Be sure to visit our Community Forum to chat about the research mentioned in this post. Over the coming weeks, we’ll be bringing you more insider access to SingularityNET’s groundbreaking AI research, as well as the specifics of our development. Please refer to our roadmaps for additional information and subscribe to our newsletter via our website.


Nil Geisweiller
SingularityNET

OpenCog Foundation, SingularityNET Foundation developer and researcher.