Introduction to Abot’s Critical Thinking

Recent AI research defines intelligence in terms of intelligent agents. An “agent” is something that perceives and acts in an environment. A “performance measure” defines what counts as success for the agent. We are way past the era of asking, “Can a machine act intelligently?” or “Can it solve a problem that a person would solve?” The answer to both is an obvious and emphatic yes. But when Abot says we give machines the power to reason, what exactly do we mean?

To reason is to understand the logical connections between ideas. The ability to reason, to be responsive to variable subject matter, issues, and purposes, has two components: 1) a set of skills for gathering, generating, and processing information and ideas, and 2) the use of those skills to guide behavior and responses. Abot goes beyond a complex cascade of logic streams: Abot Intelligent Agents don’t just reason in the basic sense; they are capable of critical thinking.

Critical thinking is defined as the process of actively and skillfully conceptualizing, applying, analyzing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.

Possession of these skills must, by its very nature, mean their continual use: a continual cycle of self-improvement.

This is very different from the mere acquisition and retention of information, because it involves a particular way in which information is sought and treated. It employs logic, for sure, but goes beyond moving every conversation through the same logical steps to arrive at a solution. Over time, Abot will adapt and change the steps themselves.

According to Linda Elder (Sept 2007), the ability to reason enables its possessor to “strive to improve the world in whatever ways they can and contribute to a more rational, civilized society.” This is Abot’s goal: to help customers get what they want in the shortest time possible.

Reason and critical thinking are born out of millions of real-world experiences throughout our lives. For example, it’s easy for a human to answer the question: “The President is in Washington D.C. Where is the President’s left foot?” Most humans will know it’s also in D.C., but machines struggle even with such basic problems. To answer that question, a human must know that the President is a person, that people have left feet, and that since feet are attached to a person, the left foot must be in the same place as the President. Machines lack the experiential knowledge necessary to answer these seemingly simple questions.
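
To make that inference chain concrete, here is a minimal sketch in Python. It illustrates the kind of background knowledge involved, not Abot’s actual implementation; the relation names and data are assumptions made for the example.

    # Hypothetical common-sense facts, written as simple relations.
    IS_A = {"President": "person"}
    HAS_PART = {"person": ["left foot", "right foot"]}
    LOCATED_IN = {"President": "Washington D.C."}

    def locate_part(owner, part):
        """Inference rule: a body part shares the location of its owner."""
        if part in HAS_PART.get(IS_A.get(owner), []):
            return LOCATED_IN.get(owner)
        return None  # without the background facts, the question is unanswerable

    print(locate_part("President", "left foot"))  # -> Washington D.C.

The point of the sketch is that the answer never appears in the data directly; it only emerges when the facts and the rule are combined.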

Modern approaches to this logic problem have in some ways regressed since the Fifties, when symbolic A.I. was an active field of A.I. research. Symbolic A.I., which aimed to solve logic problems like these, performed extremely well in very narrow scenarios. However, when the scenarios became more complex (anything close to the real world), the approach fell apart.

Describing every object in the world in a machine-readable format is a large but achievable goal, yet it isn’t enough. Today’s knowledge graphs like Google’s, although covering a massive number of topics, don’t possess a deep understanding of the entities they list. For instance, when asked about a giraffe or a kangaroo, Google’s knowledge graph can retrieve images, descriptions, and more. Yet it cannot explain the difference between a giraffe and a kangaroo, because it does not know what the concepts themselves represent.
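
As a toy illustration (a deliberately simplified stand-in, not how Google’s knowledge graph actually works), imagine the entries stored as opaque records:

    # Toy "knowledge graph": entities map to stored facts the system can
    # retrieve but cannot reason about.
    KNOWLEDGE_GRAPH = {
        "giraffe": {"image": "giraffe.jpg", "description": "A tall African mammal."},
        "kangaroo": {"image": "kangaroo.jpg", "description": "A hopping marsupial."},
    }

    # Retrieval is easy:
    print(KNOWLEDGE_GRAPH["giraffe"]["description"])

    # But "how do a giraffe and a kangaroo differ?" has no answer here: the
    # descriptions are opaque strings, not concepts the system can compare.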

The key, then, to building “thinking machines” is to provide them with not just an understanding of the objects around them, but of the relationships and interactions between all manner of things. When an A.I.’s training domain expands to cover millions of objects, the number of possible interactions between them explodes combinatorially. No human could ever record them all one by one, and to this day no approach exists to automate that learning process.
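
A quick back-of-the-envelope count shows the scale. Considering only pairwise interactions, n objects already yield n(n-1)/2 pairs:

    # Pairwise interactions among n objects grow as n * (n - 1) / 2.
    n = 1_000_000                  # a domain of one million objects
    pairs = n * (n - 1) // 2
    print(f"{pairs:,}")            # 499,999,500,000 (roughly 5 x 10^11 pairs)

And that counts only pairs; interactions among three or more objects grow faster still.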

By learning from symbolic A.I.’s past failures and successes, we arrive at another approach, one that greatly reduces the number of possible interactions one needs to train on. By narrowing our domain, storing information as a hierarchy (left foot > person > President Obama), filtering the resulting data with several logic rules, and applying modern machine learning techniques, we can simulate an intelligent response to the previous problems, one that almost seems human.
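
The sketch below shows how those three ingredients could fit together. The hierarchy, rule, and scorer here are illustrative assumptions, not Abot’s actual components.

    # 1) A narrow domain stored as a hierarchy: left foot > person > President Obama.
    PARENT = {"left foot": "person", "person": "President Obama"}
    LOCATION = {"President Obama": "Washington D.C."}

    # 2) A logic rule derives and filters answers: a part inherits the
    #    location of whatever it belongs to, found by walking up the hierarchy.
    def derived_location(entity):
        while entity not in LOCATION:
            if entity not in PARENT:
                return None
            entity = PARENT[entity]
        return LOCATION[entity]

    # 3) A trained model (stubbed here) would score candidates before the
    #    agent commits to a response.
    def score(candidate):
        return 0.0 if candidate is None else 1.0

    answer = derived_location("left foot")
    if score(answer) > 0.5:
        print(answer)  # -> Washington D.C.

Narrowing the domain is what makes this tractable: the hierarchy keeps the fact store small, the rules prune implausible candidates, and the learned model only has to rank what survives.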