Tikos Tech

Tools for building trustworthy AI

WTF is KRR and why should you care?

Mike Oaten
Published in Tikos Tech · 4 min read · Oct 25, 2024


Knowledge Representation and Reasoning (KRR) is a concept central to AI and, more specifically, trustworthy AI.

It is important because, before widespread adoption can take off, AI applications and agentic systems will need to convince (truly convince) users, system owners, and regulators that they can be trusted to become embedded in society.

Do LLMs reason? Do they have emergent reasoning capabilities? Or are they (by design) incapable of cognitive behaviour, simply pattern-matching and mimicking? This debate is brewing up nicely. On one side, LLM purists believe that with ‘scale’ it all just ‘happens’; on the other, more pragmatic voices whisper that ensemble or neural-symbolic approaches may be better bets.

We are at a fascinating inflexion point. It is worthwhile learning more about the key concept in the debate.

KRR, a definition

Knowledge Representation and Reasoning (KRR) focusses on how to encode information about the world in a form a computer system can use to solve complex tasks, draw inferences, and make decisions.

There are various models and implementations (see below) but most abstract source information into a knowledge layer (the representation step) and then operate cognitive-like functions over the representation (the reasoning step).
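The two steps above can be sketched in a few lines of code. This is a minimal illustration, not any particular KRR system: the facts and rules are invented, and the "reasoning" is simple forward chaining (repeatedly firing if-then rules until nothing new can be derived).

```python
# Representation step: knowledge encoded as a set of atomic facts
# plus if-then rules (premises -> conclusion). All names are illustrative.
facts = {"socrates_is_human"}

rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Reasoning step: apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are already known.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

Real systems differ enormously in how the knowledge layer is encoded (logic, frames, graphs, ontologies) and how inference runs over it, but this separation of representation from reasoning is the common shape.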

A Little History

The history of KRR spans decades, changing in lock-step with advances in artificial intelligence, computer science (and processing power), and cognitive science.

1950s–1960s: Beginnings of AI and Symbolic Logic

  • Logic Foundations: Knowledge representation began with symbolic logic, inspired by philosophers and logicians (George Boole, Bertrand Russell).
  • Turing’s Influence: Alan Turing’s theories on computation sparked the idea that machines could “think” like humans.
  • First Knowledge-Based Systems: Programs like the Logic Theorist were created to replicate human reasoning by proving mathematical theorems (Newell, Simon, and Shaw).

1970s: Semantic Networks, Frames, and Rule Systems

  • Semantic Networks: Networks of linked concepts began to represent knowledge visually for easier interpretation.
  • Frames: Marvin Minsky introduced “frames” as flexible structures for storing stereotyped knowledge.
  • Production Rules: Rule-based systems like MYCIN applied “if-then” logic to problem-solving in specific domains.

1980s: Rise of Expert Systems and Logic-Based Formalisms

  • Expert Systems: Systems like DENDRAL (chemical analysis) applied rules and expertise to complex problems, with later commercial systems spreading into fields such as medicine and manufacturing.
  • Description Logics: Description logics organized hierarchies and relationships, providing a foundation for modern ontologies.
  • Knowledge Representation Languages: Programming languages like Prolog allowed logic-based knowledge encoding and processing.

1990s: Ontologies and Knowledge Bases

  • Ontologies: Structured, domain-specific knowledge models emerged to organize large amounts of data on the web.
  • Knowledge Representation Standards: Formats like KIF allowed systems to share knowledge across different platforms.
  • Case-Based Reasoning: Systems used past cases as references for solving new problems, adding adaptability to rule-based reasoning.

2000s: Semantic Web, Probabilistic Reasoning, and Cognitive Architectures

  • Semantic Web and OWL: Tim Berners-Lee’s Semantic Web aimed to make web content understandable by machines using OWL for data organization.
  • Probabilistic Reasoning: Bayesian networks allowed reasoning under uncertainty by updating beliefs with new evidence.
  • Cognitive Architectures: Frameworks like ACT-R simulated human thinking by integrating symbolic and procedural knowledge.

2010s: Knowledge Graphs, Deep Learning, and Hybrid Models

  • Knowledge Graphs: Google’s Knowledge Graph connected entities in a web-like structure to improve search and recommendation systems.
  • Deep Learning: Neural networks began representing and processing knowledge, enabling deep learning in language and vision.
  • Reinforcement Learning: Systems learned decision-making from experience, blending learned rules with structured reasoning.

2020s and Beyond: Explainability, Language Models, and Commonsense Reasoning

  • Explainable AI (XAI): KRR shifted toward creating models that could explain their reasoning to users. Knowledge graphs and symbolic rules are increasingly used with neural methods to produce models with explainable reasoning paths.
  • Commonsense Knowledge: Projects like ConceptNet aimed to embed commonsense knowledge into AI for better human understanding.
  • Large Language Models: LLMs integrated vast natural-language training data to produce more natural, informed responses based on neural-net pattern matching.

“If You Can’t Explain It, You Don’t Understand It”

What emerges, I think, is a simple test: can the output of an AI system be explained in trusted terms? Or, put another way: “If You Can’t Explain It, You Don’t Understand It”. Understanding is key because it enables the reasoning employed in a specific case to generalise and adapt to new situations.

The output of LLMs or deep neural nets cannot be adequately explained by this measure because they are inductive reasoning systems: they start with specific facts (training data) and then generalise to generate their output.

Conversely, some of the systems in the history above can be explained, because they are deductive, switching the direction: they go from general rules to specific output. By definition these can explain their reasoning (but they have significant limitations: hard rules constrain their ability to adapt to new situations).
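The deductive case can be made concrete with a toy reasoner that records which rule justified each conclusion, so every output carries a human-readable trace. The rule names and facts here are invented for illustration; the point is only that in a deductive system the explanation falls out of the inference itself.

```python
# Each rule carries a name so conclusions can cite their justification.
# Rules and facts are made up purely to illustrate the idea.
rules = [
    ("R1", {"bird", "not_penguin"}, "can_fly"),
    ("R2", {"can_fly"}, "can_migrate"),
]

def explain(facts, rules):
    """Deduce conclusions, mapping each fact to the rule that produced it."""
    derived = dict.fromkeys(facts, "given")  # fact -> justification
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= derived.keys() and conclusion not in derived:
                derived[conclusion] = f"{name}: {sorted(premises)} -> {conclusion}"
                changed = True
    return derived

trace = explain({"bird", "not_penguin"}, rules)
for fact, why in trace.items():
    print(fact, "<-", why)
```

An inductive system like an LLM has no equivalent of this trace: the "rules" are implicit in billions of learned weights, which is exactly why its outputs resist explanation in trusted terms.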

And there are others in between. Among them, abductive (case-based) and analogical (comparison-based) reasoning could be candidates — when combined with deep neural nets — to tackle more complex reasoning tasks AND explain their outputs in a human-friendly, trusted manner.

Disclosure: The history summary was generated by an LLM with human review and editing, the rest of the article is organic.


Written by Mike Oaten

I'm the CEO of tikos.tech, providing tools for developers to build trustworthy AI applications.
