Solving problems the human way: Theory of Mind

Symbiosis Unleashed: Amplifying AI with the Integration of Logic and Generative Large Language Models

Carlos Araya
8 min read · Jun 27, 2023


As we delve deeper into the AI era, we are confronted with a growing need for software solutions that support real-world business and engineering problems. In response to this need, we propose the marriage of formal logic, particularly an innovative new modal logic called Aleph, with generative Large Language Models (LLMs).

This alliance is more than just a merger; it is a symbiosis that aims to give AI a considerable boost, enabling it to tackle complex real-world problems effectively. Through the use of logic, we propose a promising strategy for generating robust, explainable, rational, and scalable solutions that leverage LLMs without introducing reasoning errors.

Starting with the core benefits of this symbiotic integration, we elaborate on its strong mathematical foundation, ability to handle transitions, adherence to the natural Theory of Mind paradigm, and compatibility with industry-level purpose-specific problem solvers.

The Aleph logic, we argue, is an invaluable tool for this purpose. With its clear semantics, impressive expressiveness, and natural extensibility, Aleph grants AI an unprecedented level of abstraction in its problem-solving strategies.

We illustrate how the union of LLMs and Aleph can simplify tasks such as ontology engineering and process abstraction, secure valid deductions, and offer explanations to users.

We then explore the roots of Aleph, its evolution to represent and reason about all potential states of a problem, and its unique characteristics. Additionally, we exemplify practical applications of Aleph in sectors such as e-commerce, supply chain optimization, and fulfillment processes, underlining its effectiveness and broad-ranging potential.

In summary, we aim to make a persuasive argument for the synergistic relationship between logic and LLMs. We are convinced that this collaboration could herald the advent of a new age of AI applications — more logical, explainable, scalable, robust, and, most importantly, attuned to the complexities of real business and engineering scenarios.

Understanding the Aleph Logic

Aleph is a modal logic specifically designed for business and engineering applications that target an agent-based approach (artificial general intelligences). It offers a formal language that extends first-order logic by incorporating modal concepts such as necessity and contingency.

However, Aleph’s most significant advancement is in its well-defined semantics, rooted in the potential states of business or engineering systems. This logic offers a mathematical framework for modeling solutions where the manipulated entities are not numbers but rather the spaces of problem states. Aleph is sound and complete in this regard.

Embracing the Benefits of Symbiosis

The union of Aleph modal logic and generative LLMs presents four key benefits:

1. A strong mathematical foundation that supports robust concepts such as belief systems, behaviors, ethics and obligations, self-reflection, and more.

2. The ability to build agents that not only represent specific situations but also capture transitions that depend on previous situations. In fact, Aleph’s agents satisfy the AGM postulates[1], ensuring logical principles of rationality and coherent belief revision (a toy sketch of this kind of revision follows this list).

3. The ability to solve problems using a natural Theory of Mind approach. This cognitive ability to understand and attribute mental states and intentions to others and to ourselves[2] is a successful evolutionary capability that facilitates analysis of, and adaptation to, complex situations involving multiple agents in a modularized manner.

4. The integration of logic-based systems with specialized industry-level problem solvers, such as theorem provers and deductive technologies, constraint satisfaction, and optimization solvers, which allows large-scale problems to be modeled and solved intuitively.
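
To make point 2 concrete, here is a deliberately simple sketch of belief revision in the spirit of the AGM postulates, using the Levi identity (revise by first contracting the negation, then expanding). It is only an illustration under toy assumptions: beliefs are bare propositional literals, and this is not Aleph’s actual belief-revision machinery.

```python
# A toy, hedged illustration of belief revision in the spirit of the AGM
# postulates (Levi identity: revise = contract the negation, then expand).
# This is NOT Aleph's belief-revision machinery; beliefs here are just
# propositional literals, with negation written as a leading '~'.

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def contract(beliefs, literal):
    """Remove a literal so the agent no longer entails it."""
    return {b for b in beliefs if b != literal}

def expand(beliefs, literal):
    """Add a literal without checking consistency."""
    return beliefs | {literal}

def revise(beliefs, literal):
    """Levi identity: K * p = (K - ~p) + p, keeping the result consistent."""
    return expand(contract(beliefs, negate(literal)), literal)

agent_beliefs = {"in_stock", "~delayed"}
agent_beliefs = revise(agent_beliefs, "delayed")   # new evidence arrives
print(agent_beliefs)                               # contains 'in_stock' and 'delayed'
```

In this restricted setting the revised belief set stays consistent and always contains the new information, which is the intuition behind the rationality guarantees that the AGM postulates formalize.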

The Aleph logic’s clear semantics, remarkable expressiveness, natural extensibility, and problem-solving strategies, reminiscent of how humans represent the world and tackle problems by modeling with equations and variables, offer a new and unparalleled level of abstraction.

The Future with Aleph and Generative Language Models

There’s a promising opportunity to unite formal logic with the advancements in LLMs, paving the way for more explainable, scalable, robust, and rational AI business and engineering applications.

The potential benefits of this combination are numerous:

1. LLMs can help streamline the laborious task of ontology engineering, typically a prerequisite for large logical systems, by discovering concepts and relationships in corpora of different sorts, including documentation, manuals, process descriptions, and business transactional data.

2. The relationships between formal languages and automata, together with Internet of Things sensors, could help LLMs learn from observation and create automata that represent abstractions of diverse processes.

3. A logic is essentially equivalent to the language containing all the theorems inductively generated from its axioms and inference rules[3]. If the output of LLM-based methods is constrained to that language, we ensure that the deductions carried out are valid (a minimal sketch of such a check follows this list). The real value of this characterization comes from the following point.

4. An often-overlooked feature of logics is that they are inherently extensible with other theories. If the base logic, like Aleph, has certain properties, such as soundness and completeness, then incorporating a new logic that deals with other concepts, such as time, sortals, or business theories, yields a larger and richer symbolic language with the same properties, as long as the added axioms and inference rules preserve them[4]. This unique capability makes a system like Aleph an ideal foundation for constructing more potent AI theories.

5. LLMs can help synthesize descriptions of the steps carried out by long deductive and numerical computations, providing linguistic or graphical explanations to users.

6. In addition, generative methods can gradually learn to choose the most promising deductive and calculating steps in large search spaces, much as humans become experts in certain domains. This could become a fantastic opportunity now that search strategies are starting to dominate proof methods in the struggle with the complexity and practicality of real-life situations and their overwhelming abundance of intricate details[5]. The approach has already been explored in mathematics with some good results[6].

7. Finally, a similar approach could help mitigate the computational irreducibility problem pointed out by Stephen Wolfram[7] by avoiding the exploration of all possible branches when searching for viable solutions.
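
As a concrete illustration of point 3, the following sketch shows a generate-and-check loop in which candidate conclusions are accepted only when an inference rule licenses them. The knowledge base, the rules, and the propose_steps function (a stand-in for an LLM call) are hypothetical; this illustrates the general idea, not Aleph’s proof machinery.

```python
# A minimal sketch of constraining generative output to valid deductions:
# candidate steps (produced here by propose_steps, a hypothetical stand-in
# for an LLM call) are accepted only if an inference rule licenses them.

# Knowledge base: facts are atoms, rules are (premise, conclusion) pairs.
facts = {"order_placed", "payment_ok"}
rules = [("order_placed", "reserve_inventory"),
         ("payment_ok", "authorize_shipment"),
         ("reserve_inventory", "schedule_picking")]

def licensed(candidate, known):
    """A candidate conclusion is valid only if some rule derives it."""
    return any(premise in known and conclusion == candidate
               for premise, conclusion in rules)

def propose_steps(known):
    """Stand-in for an LLM: may propose both valid and invalid steps."""
    return ["reserve_inventory", "ship_for_free", "authorize_shipment"]

derived = set(facts)
for candidate in propose_steps(derived):
    if licensed(candidate, derived):
        derived.add(candidate)          # keep only valid deductions
    else:
        print("rejected:", candidate)   # e.g. 'ship_for_free'

print(sorted(derived))
```

The generative component is free to be creative, but only conclusions inside the language of theorems survive the check, which is exactly the constraint described above.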

LLMs are a testament to how new approaches can overcome several limitations. Aleph does not break any theoretical limits, but because it is tailored to represent and reason over the business problem states of computer applications, and because it seamlessly complements other technologies such as LLMs, theorem provers, constraint satisfaction, and linear programming tools, it is a step in the right direction toward more expressive, manageable, extensible, and scalable solutions.

Aleph: A Deep Dive into its Foundations

Aleph was developed to represent and reason about all possible states that a business, science, or engineering problem can assume, commonly known as the “possible worlds” of the problem. This approach diverges from the conventional focus on a single state managed by most programming languages.

The notion of "possible worlds" is a semantic concept that can be traced back to the work of the philosopher Gottfried Leibniz in the 17th century, who proposed the existence of an infinite number of possible worlds, each representing a distinct way the world could be. In the 20th century, Saul Kripke and David Lewis proposed using possible worlds as a framework for understanding the semantics of modal statements. Kripke suggested that possible worlds could be considered abstract entities, each a comprehensive and internally coherent representation of how the world could be, whereas Lewis considered them concrete entities.

To illustrate, consider mathematics, where a numeral such as ‘4’ denotes the property shared by all sets with exactly four elements, known as the number four[8]. In Aleph, a statement like ‘Year = 2023’ precisely denotes what is shared by all the “worlds” in which Year equals 2023. Aleph ensures a complete correspondence between statements and the space constituted by the possible states where those statements hold. Technically speaking, the Aleph logic is both sound and complete regarding the universe of “possible worlds” within a given finite-state problem or system — referring to one with a defined yet potentially extensive range of possible states.
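
To make this correspondence tangible, here is a small toy sketch, written in ordinary Python rather than Aleph’s own syntax, that enumerates a finite universe of states and treats a statement’s denotation as the set of states where it holds; necessity and contingency then become simple checks over that universe. The two-variable domain is an assumption made purely for illustration.

```python
from itertools import product

# Toy illustration of the idea that statements denote spaces of problem
# states (this is NOT Aleph's syntax or implementation, just the semantic
# intuition, with a hypothetical two-variable domain).

# Enumerate a small finite universe of possible states.
years = [2022, 2023, 2024]
statuses = ["pending", "shipped", "delivered"]
universe = [{"year": y, "status": s} for y, s in product(years, statuses)]

def denotation(statement):
    """The 'space' denoted by a statement: all states where it holds."""
    return [state for state in universe if statement(state)]

def necessarily(statement):
    """Modal necessity: the statement holds in every possible state."""
    return all(statement(state) for state in universe)

def contingently(statement):
    """Contingency: the statement holds in some states but not in all."""
    holds = denotation(statement)
    return 0 < len(holds) < len(universe)

year_2023 = lambda s: s["year"] == 2023          # 'Year = 2023'
valid_year = lambda s: s["year"] in years        # true everywhere

print(len(denotation(year_2023)))   # 3 states share 'Year = 2023'
print(necessarily(valid_year))      # True: holds across all states
print(contingently(year_2023))      # True: holds in some states, not all
```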

Aleph is a significant leap forward from previous efforts to use modal logic in AI. It introduces several innovative features for tackling practical, real-world problems effectively by following straightforward procedures (a toy end-to-end sketch follows the list):

1. The universe of problem states is generated using typed structures to describe the problem domains (data) and functions and variables to reference the domain values.

2. The problem spaces are described and operated using statements composed of operations like and (intersection), or (union), not (complement), and necessity (for differentiating whether a given statement is true across all states).

3. Parameters are found using machine learning or other methods.

4. Solutions are modeled using reflective equivalences (‘equations’), as in finance, engineering, and economics.

5. Goals are specified using maximizations and reflection.

6. Aleph finds the roots of the equations (the singularities) using advanced proprietary and off-the-shelf algorithms.
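
The following toy sketch walks through these steps by brute force over a tiny state space. The domain, the parameters, the ‘equation’, and the goal are all illustrative assumptions, and the exhaustive search merely stands in for Aleph’s proprietary and off-the-shelf solvers, which are not shown.

```python
from itertools import product

# A toy end-to-end sketch of the procedure above, done by brute force over
# a tiny state space. Everything in it is an illustrative assumption.

# 1. Typed structure describing the problem domains (data).
quantities = range(0, 11)          # units to ship
carriers = ["ground", "air"]
universe = [{"qty": q, "carrier": c} for q, c in product(quantities, carriers)]

# 3. Parameters (could come from machine learning; hard-coded here).
unit_cost = {"ground": 2.0, "air": 5.0}

# 4. A reflective equivalence ('equation') constraining acceptable states.
def feasible(state):
    demand_met = state["qty"] >= 6
    within_budget = state["qty"] * unit_cost[state["carrier"]] <= 30
    return demand_met and within_budget

# 2. The problem space described by the statement: intersection of conditions.
solution_space = [s for s in universe if feasible(s)]

# 5./6. Specify a goal via maximization and find the best satisfying state.
def goal(s):
    return s["qty"] - 0.1 * s["qty"] * unit_cost[s["carrier"]]

best = max(solution_space, key=goal)
print(len(solution_space), best)   # 6 feasible states; best ships 10 by ground
```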

Practical Applications

The Aleph logic and its implementation have been tried in several applications, such as client agents for e-commerce, problem specification, supply chain optimization, and fulfillment processes.

Aleph logic expressions translate seamlessly into the representations employed by various problem-solving methods. Therefore, Aleph can be used to automate the problem descriptions required by these methods and to analyze their outputs, facilitating result improvement through interactive processes. These automated interactions include relaxing constraints, adjusting parameters, and performing belief revisions to produce new, more satisfactory descriptions. The decision-making process carried out by Aleph during these interactions is guided not only by business-defined strategies and policies but also by learned insights. The Aleph implementation uses algorithms based on Market-Basket Analysis, Shapley values, and behavioral economics principles. The amalgamation of abstraction and potency offered by Aleph represents a remarkable leap forward for automating programming tasks and for the reliable automation of human-machine interactions.
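
The "relax and retry" interaction described above can be pictured with a deliberately small sketch: try to solve, and if no state satisfies the active constraints, drop the least important one and report the revision. The universe, the constraints, and their priorities below are hypothetical, and the loop is not Aleph’s implementation.

```python
# A hedged sketch of the 'relax and retry' interaction described above
# (not Aleph's implementation; constraints and priorities are hypothetical).

def solve(constraints, universe):
    """Return the states satisfying every active constraint, if any."""
    return [s for s in universe if all(check(s) for _, check in constraints)]

universe = [{"price": p} for p in range(50, 151, 10)]

# Constraints ordered from most to least important (business-defined policy).
constraints = [
    ("price >= 50",  lambda s: s["price"] >= 50),
    ("price <= 80",  lambda s: s["price"] <= 80),
    ("price >= 100", lambda s: s["price"] >= 100),   # conflicts with the above
]

while True:
    solutions = solve(constraints, universe)
    if solutions:                       # a satisfying description was found
        print("feasible:", solutions[:3])
        break
    dropped = constraints.pop()         # relax the least important constraint
    print("relaxing:", dropped[0])      # explain the revision to the user
```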

With artificial entities possessing beliefs, behaviors, and personalized rational or biased adaptation capabilities, it is striking how naturally problems can be modeled using agents. These agents represent the various actors or entities in a problem, such as clients, knowledge workers, decision-makers, physical entities, or organizations. Singularities has developed methods to learn and represent behaviors and decision patterns suitable for analysis, explanation, and, more interestingly, for adaptation within a dynamic flow of interactions.

A Brief History

The term Aleph was coined in 2012 to name the effort of creating the logic system and platform described in this document; it was inspired by Borges’s book ‘The Aleph’. Singularities was founded in 2014 in California to bring Aleph’s applications to market. The beginnings of Aleph can be traced back to the 1980s and 1990s, when the modal logic Z was proposed for knowledge representation and reasoning around the frame problem[9],[10]. The last ten years of research on Aleph by Singularities have significantly improved the logic’s theory and semantics and delivered a scalable implementation targeting business and engineering applications.

[1] https://plato.stanford.edu/entries/logic-belief-revision/

[2] David Premack proposed that consciousness emerges when an individual is not only aware of others having thoughts and intentions (theory of mind) but also when they are aware of their own mental states and thoughts.

[3] Alfred Tarski’s Syntactic Approach.

[4] Kurt Gödel’s Incompleteness Theorem.

[5] https://people.eng.unimelb.edu.au/pstuckey/PPDP2013.pdf.

[6] https://openai.com/research/improving-mathematical-reasoning-with-process-supervision.

[7] https://mathworld.wolfram.com/ComputationalIrreducibility.html.

[8] Proposed by German mathematician and logician Gottlob Frege.

[9] Frank Brown, ed. Proceedings of Workshop on The Frame Problem in AI. Elsevier, 1987.

[10] Carlos Araya. “On the Knowledge Representation Capabilities of a Modal Logic”. In: Proceedings of 1994 Florida AI Research Symposium, 1994.
