PyReason: Software for Open World Temporal Logic

How does PyReason facilitate the neuro-symbolic (NeSy) paradigm?

Dyuman Aditya
11 min read · Apr 5, 2023

NOTE: The code in this article is now out of date. Please refer to the updated code and instructions if you want to run the example.

In this post I’ll go over our recent work on a piece of software called PyReason. For a detailed description and the GitHub repository, please check out the links below:

Paper: https://arxiv.org/abs/2302.13482

Code: https://github.com/lab-v2/pyreason

Content

  1. Background
    a. Neuro-Symbolic Frameworks
    b. Predicates, Atoms, Annotations and Interpretations in a Graph Setting
    c. Reasoning on Graphical Structures
  2. PyReason
    a. Features
  3. Using PyReason on an Example
    a. Input Files for PyReason
    b. Generating the Graph
    c. Creating Labels
    d. Creating Facts
    e. Creating Rules
    f. Running the Program
  4. Conclusion

Background

This post uses concepts from predicate and first-order logic; if you need a refresher on these topics, this blog post covers some of the basics.

Neuro-Symbolic Frameworks

Deep neural networks have been a core part of some of the biggest AI breakthroughs in recent years. There are, however, several critical issues with deep neural networks, such as a lack of interpretability, an inability to reason with common sense, and the large amounts of data required for learning.

Neuro-Symbolic AI aims to bridge these gaps in deep learning by integrating symbolic logic with neural networks. In such systems, there are generally, as you might have guessed, two main components:

  1. A logical component/framework
  2. A neural component/framework

Several neuro-symbolic systems have been introduced recently, such as Logical Neural Networks (LNNs) [1] and Logic Tensor Networks (LTNs) [2]. Each of these systems has its own logical framework to perform deduction and reasoning, and each uses different features and functionalities. For example, the logic framework in LNNs uses open-world reasoning (the assumption that logical statements start out as uncertain rather than false), while the logic framework in LTNs uses fuzzy, real-valued logic.

Most of the time, new custom software has to be written for the specific logic being used.

Predicates, Atoms, Annotations and Interpretations in a Graph Setting

Consider the following graph for illustration:

Graph of friends and their pets

A predicate is an expression with one or more variables (one or two in this case). The graph above contains both unary predicates (node labels) and binary predicates (edge labels).

Pet(x) is an example of a unary predicate
Friends(x1, x2) is an example of a binary predicate

x, x1 and x2 are all variables that can be replaced by individual nodes.

As soon as the variables in the predicates are replaced by specific nodes, they’re called atoms. This process is called grounding. For example: Pet(Dog) and Friends(John, Mary) are ground atoms.

Annotations are real values or intervals. For example, to represent the real number 0.2, the annotation would be [0.2, 0.2], and to represent an interval between 0.2 and 1, the annotation would be [0.2, 1].

Interpretations are mappings from atoms to annotations at some point in time. For example, if I is an interpretation and t is the point in time represented by the graph above, then I(Friends(John, Mary), t) = [1,1].
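
To make this concrete, here is a minimal Python sketch (my own illustration, not PyReason’s internal data structures) of an interpretation as a mapping from atoms and timesteps to annotation intervals:

# A toy interpretation: a dictionary from (atom, timestep) pairs to [l, u] intervals.
# Atoms are represented here as (predicate, arguments) tuples purely for illustration.
interpretation = {
    (('Friends', ('John', 'Mary')), 0): (1.0, 1.0),  # I(Friends(John, Mary), 0) = [1,1]
    (('Pet', ('Dog',)), 0): (1.0, 1.0),              # I(Pet(Dog), 0) = [1,1]
}

print(interpretation[(('Friends', ('John', 'Mary')), 0)])  # (1.0, 1.0)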

Reasoning on Graphical Structures

Reasoning over graphs is very similar to normal inference over knowledge bases.

  1. We have a set of Facts or initial conditions that can be represented in the graph.
  2. We have a set of Rules that specify the dynamics of the system and how things will change.
  3. We ground the variables in the rule to check if conditions are satisfied.

The main difference is that the grounding of variables to constants is done only over the neighbors of the target we are considering. For example, if we are checking whether a rule applies to a certain node, say John, then the satisfaction of the rule’s clauses is determined by John’s neighbors, specifically Mary and Justin.
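
As a rough illustration (this is not PyReason’s internal algorithm), restricting the grounding to a target node’s neighbors might look like this with networkx:

import networkx as nx

# Build a tiny fragment of the friends graph
g = nx.DiGraph()
g.add_edge('John', 'Mary', Friends=1)
g.add_edge('John', 'Justin', Friends=1)

# To ground a clause like Friends(target, x1) for target = John,
# x1 only ranges over John's neighbors rather than over every node in the graph.
target = 'John'
groundings = [x1 for x1 in g.successors(target)
              if g[target][x1].get('Friends') == 1]
print(groundings)  # ['Mary', 'Justin']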

PyReason

PyReason is an open-source Python implementation for reasoning and deduction in logic programming.

It is based on the concept of Annotated Logic [3] which was found to be a general framework that can be used to capture several fuzzy, real-valued and other logics. In other words, annotated logic is like a superset of the logic used in neuro-symbolic systems. Therefore, instead of inventing a new logic for each neuro-symbolic framework, why not use annotated logic?

PyReason does just this, and as a result, encapsulates all the different logics used in current neuro-symbolic frameworks in addition to adding multiple other important and useful features that could be used in future frameworks.

PyReason is developed as a modern scalable implementation. It uses the numba library to translate most of the Python code into machine code thereby significantly accelerating the runtime while simultaneously retaining the modularity and simplicity of Python.
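
For readers unfamiliar with numba, here is a generic example of the idea (this is not PyReason’s code): decorating a numeric function with @njit compiles it to machine code the first time it is called.

from numba import njit

@njit
def clamp_interval(lower, upper):
    # Keep an annotation interval within [0, 1]
    return max(0.0, lower), min(1.0, upper)

print(clamp_interval(-0.2, 1.3))  # (0.0, 1.0)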

Features

These are some of the key features that PyReason offers.

  1. Explainability: This key feature allows users to figure out exactly what happened and why it happened in the reasoning process. The output of PyReason is a CSV file that gives the user detailed information about the reasoning process.
  2. Support for reasoning over knowledge graphs: Graphical structures are one of the most commonly used representations of data. PyReason takes as input a knowledge graph in GraphML format and performs reasoning and deduction using data from this graph.
  3. Support for annotations: Previous logic implementations such as Prolog and Epilog did not support annotations. PyReason supports annotations as well as functions over these annotations that can be used in logical rules.
  4. Temporal extensions: PyReason supports reasoning over time with an integrated time component. The software uses interpretations as maps from atoms and time points to annotations: I(predicate(x), t) = [l, u].
  5. Type-checking: Graphical structures are inherently sparse, with predicates being associated with specific nodes/edges. For example, in the graph above, we should not consider atoms such as Pet(John); such atoms should be restricted to nodes that are animals. PyReason handles these cases and, as a result, cuts down on the search space while evaluating the eligibility of a rule.
  6. Ability to detect and resolve inconsistencies: If during the reasoning process we find, for example, that I(Pet(Dog), t) = [1,1] and I(Human(Dog), t) = [1,1], PyReason will alert the user, set the annotations to [0,1] and make the interpretation immutable to avoid propagating the error (see the sketch after this list).
  7. Handling of uncertainty: By making use of annotations, PyReason is able to handle uncertainty through intervals. It can also represent propositional (True/False) cases by using [0,0] and [1,1].
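
Here is a toy sketch of the interval check behind feature 6 (my own simplification, not PyReason’s implementation, and it assumes Pet and Human have been declared complementary): two complementary predicates are inconsistent when one annotation does not overlap the complement of the other.

def complement(bound):
    # If p has bound [l, u], its complement should lie within [1 - u, 1 - l]
    l, u = bound
    return (1.0 - u, 1.0 - l)

def overlaps(a, b):
    return max(a[0], b[0]) <= min(a[1], b[1])

pet_dog = (1.0, 1.0)    # I(Pet(Dog), t)   = [1,1]
human_dog = (1.0, 1.0)  # I(Human(Dog), t) = [1,1]

if not overlaps(human_dog, complement(pet_dog)):
    # Resolve the inconsistency by resetting both annotations to the uncertain bound [0,1]
    pet_dog = human_dog = (0.0, 1.0)
    print('Inconsistency detected, annotations reset to [0,1]')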

Using PyReason on an Example

Let’s look at an example where we can run PyReason. Using the same graph as above, let’s define facts and rules in YAML, which is the input format PyReason supports.

Graph of friends and their pets

Let’s assume that a person’s popularity (for illustration 😄) is determined by whether they have AT LEAST ONE friend who is popular AND who has the same pet that they do. If this is true, then they are considered popular. We want to study the spread of popularity given some initial conditions and the graph above.

If we were to write this as a logical rule, it would look like this:

Rule for popularity spread
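
In annotated-logic style notation, the rule reads roughly as follows (a reconstruction consistent with the YAML rule defined later in this post), with the head taking effect one timestep (delta_t = 1) after the body is satisfied:

popular(x) : [1,1] ← popular(x1) : [1,1] ∧ Friends(x, x1) : [1,1] ∧ owns(x1, x2) : [1,1] ∧ owns(x, x2) : [1,1]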

Let’s set an initial condition on Mary so that she is popular. In the following inference steps, based on the rule we have just stated:

  1. Justin becomes popular because he is friends with Mary who is popular and they both have a Cat.
  2. John becomes popular because he is friends with Justin who has just become popular in the previous step and they both have a Dog.

Input Files for PyReason

PyReason can take up to 5 input files:

  1. A GraphML file containing the graph (required)
  2. A YAML file containing the pyreason rules (required)
  3. A YAML file containing the pyreason facts (optional but recommended)
  4. A YAML file containing the pyreason labels (optional but recommended)
  5. A YAML file containing the pyreason ipl (inconsistent predicate list) (optional)

We will not be using the ipl (inconsistent predicate list) in this example, but it is a way to declare two predicates as complementary to each other. An update to one will result in an update to the complement, and if there are any inconsistencies, they will be detected and resolved. This is a handy way of representing negation. A rough sketch of such a file is shown below.
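
The exact format is best taken from the example YAML files in the repository; as a rough, illustrative sketch only (the label unpopular is hypothetical), an ipl file pairs up complementary labels:

---
# Pairs of complementary predicates/labels (illustrative sketch only)
ipl:
- [popular, unpopular]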

Generating the Graph

The graph above is a representation of three people, named Mary, John and Justin, and their pets. Let’s see how we can generate such a graph using the networkx Python library.

import networkx as nx

# Create a Directed graph
g = nx.DiGraph()

# Add the nodes
g.add_nodes_from(['John', 'Mary', 'Justin'])
g.add_nodes_from(['Dog', 'Cat'])

# Add the edges and their attributes. When an edge attribute has a numeric value x <= 1,
# the annotation associated with it will be [x, 1]. NOTE: These attributes are immutable
# Friend edges
g.add_edge('Justin', 'Mary', Friends=1)
g.add_edge('John', 'Mary', Friends=1)
g.add_edge('John', 'Justin', Friends=1)

# Pet edges
g.add_edge('Mary', 'Cat', owns=1)
g.add_edge('Justin', 'Cat', owns=1)
g.add_edge('Justin', 'Dog', owns=1)
g.add_edge('John', 'Dog', owns=1)

# Save the graph to GraphML format to input into PyReason
nx.write_graphml_lxml(g, 'friends_graph.graphml', named_key_ids=True)

Creating Labels

In this file we place any label that we want to associate with nodes or edges. This is the format of the file:

---
# Labels that apply to all nodes
node_labels:
- popular

# Labels that apply to all edges (none in this case)
edge_labels:

# Labels that apply to specific nodes (none in this case)
node_specific_labels:

# Labels that apply to specific edges (none in this case)
edge_specific_labels:

Creating Facts

With the graph now created in GraphML format and the label popular associated with the nodes, let’s look at how we can set an initial condition. In this case we’re going to set the bound of the label “popular” on Mary to [1,1]. This is a static fact; in other words, it will not change when we go to the next timestep/inference step.

---
# List all facts below
nodes:
  fact_1:
    node: Mary # Name of the node
    label: popular # Name of the label of the node
    bound: [1, 1] # Bound of the label
    static: True # Whether it applies to all timesteps and cannot change
    t_lower: 0 # Starting time
    t_upper: 2 # Ending time (since the fact is static, it remains in force for all timesteps)

# Nothing here, we have no edge facts
edges:

Creating Rules

The final step is to create the rule that we stated earlier. This tells us how popularity will spread. The following is the format for rules:

---
# All Rules come under here
rule_1:
  target: popular # Target label

  target_criteria: # List of all target criteria
  # All criteria come here in the form [label, lower_bound, upper_bound]

  delta_t: 1 # Delta t, time after which this rule is applied

  # The following are the 4 clauses for the rule. Each neighbour criterion has the form
  # [criteria on node/edge, variable, label, [lower_bound, upper_bound], [equal/greater/less/greater_equal/less_equal, number/[percent, total/available], value]]
  neigh_criteria:
  - [node, [x1], popular, [1,1]]
  - [edge, [target, x1], Friends, [1,1]]
  - [edge, [x1, x2], owns, [1,1]]
  - [edge, [target, x2], owns, [1,1]]

  ann_fn: [1,1] # Annotation function name or bound: either the name of a function from annotation_functions.py, or an interval [l, u]

This rule changes the bound on the popular label of a node from [0,1] to [1,1] in the next timestep if the 4 clauses (neigh_criteria) are satisfied. For example, at timestep 0 the clauses are satisfied for target = Justin with x1 = Mary and x2 = Cat, so Justin’s popular bound becomes [1,1] at timestep 1.

Running the Program

Now let’s run PyReason with these files! There are two ways to use PyReason, as a command line tool or as a Python library. We will be using the latter method because of its simplicity.

To install PyReason as a Python library follow the instructions here.

import pyreason as pr

# Modify the paths based on where you've stored the files we made above
graph_path = 'friends_graph.graphml'
labels_path = 'labels.yaml'
facts_path = 'facts.yaml'
rules_path = 'rules.yaml'

# Modify pyreason settings to enable verbose output and to save the rule trace to a file
pr.settings.verbose = True # Print info to screen
pr.settings.atom_trace = True # This allows us to view all the atoms that have made a certain rule fire

# Load all the files into pyreason
pr.load_graph(graph_path)
pr.load_labels(labels_path)
pr.load_facts(facts_path)
pr.load_rules(rules_path)

# Run the program for two timesteps to see the diffusion take place
interpretation = pr.reason(timesteps=2)

# Display the changes in the interpretation for each timestep
dataframes = pr.filter_and_sort_nodes(interpretation, ['popular'])
for t, df in enumerate(dataframes):
    print(f'TIMESTEP - {t}')
    print(df)
    print()

# Save all changes made to the interpretations to a file. This is the explainability component
pr.save_rule_trace(interpretation)

The screen output should look like the following, which corresponds to what we expected:

Timestep: 0
Timestep: 1
Timestep: 2

Converged at time: 2
Fixed Point iterations: 3

TIMESTEP - 0
component popular
0 Mary [1.0, 1.0]

TIMESTEP - 1
component popular
0 Mary [1.0, 1.0]
1 Justin [1.0, 1.0]

TIMESTEP - 2
component popular
0 Mary [1.0, 1.0]
1 John [1.0, 1.0]
2 Justin [1.0, 1.0]

You should also find a CSV file in the same folder as your Python script which gives you an explainable trace of all the changes that occurred during the program, along with the nodes or edges that caused each rule to fire. There is a column for each clause which contains the nodes or edges that satisfied that clause. This can be extremely useful for debugging and brings explainability into the system!
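
If you prefer to inspect the trace programmatically, a quick way is to load it with pandas. The exact file name of the trace CSV may differ between PyReason versions, so the snippet below simply picks up every CSV written next to the script (an assumption made for illustration):

import glob
import pandas as pd

# Load and preview every CSV file written alongside the script
for path in glob.glob('*.csv'):
    print(path)
    print(pd.read_csv(path).head())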

CSV node-trace file displaying explainability

Running PyReason is that simple. To start running your own inference programs, head to the GitHub repository and check out the format of the example YAML files to get the hang of how to write the input files.

Conclusion

As we’ve seen, PyReason is an extremely versatile, scalable, modern piece of software for reasoning over graphical structures. Because it encompasses several logics and possesses numerous desired features, it has the potential to be integrated into the logical component of any (present and future) neuro-symbolic framework.

There are several ongoing improvements being made to the software, one of which includes the use of a parser so that the user can input facts and rules in Prolog/Datalog style instead of in YAML.

About the author

Dyuman Aditya is currently a research intern at Arizona State University. His interests lie in the fields of Machine Learning, Neuro Symbolic AI and Robotics. He previously worked at a robotics and AI startup called Telekinesis at TU Darmstadt in Germany and as a research intern at Carnegie Mellon University. He graduated with honours from the Sri Aurobindo International Centre of Education with a BSc in Computer Science and Mathematics.

References

[1] Riegel, Ryan, et al. “Logical neural networks.” arXiv preprint arXiv:2006.13155 (2020).

[2] Badreddine, Samy, et al. “Logic tensor networks.” Artificial Intelligence 303 (2022): 103649.

[3] Kifer, Michael, et al. “Theory of generalized annotated logic programming and its applications.” The Journal of Logic Programming 12 (1992): 335–367.
