Algorithmic Intelligence

William L. Weaver
Published in TL;DR Innovation · Mar 31, 2018

Very Real Advances in the Hierarchical Temporal Memory Platform

I had the great pleasure of dining with Dr. John McCarthy, Professor Emeritus of Computer Science at Stanford University, when he received the 2003 Benjamin Franklin Medal in Computer and Cognitive Science at an award ceremony in Philadelphia. In addition to being credited with coining the phrase “artificial intelligence (AI)” in 1955, Professor McCarthy invented the LISP programming language, the “if-then-else” programming structure, program recursion, and the concept of preemptive multitasking to increase the availability of scarce resources. At the time of the award ceremony, Professor McCarthy was in poor health and not very conversational. Even though he did not regale the members of our table with witty tales of problem solving or develop a new algorithm in real time, it was obvious we were in the presence of an extremely intelligent human being.

Image by kalhh on Pixabay.

So the question at hand is: How was a 27-year-old human able to initiate an intense study of learning and intelligence, with the goal of making a machine simulate them, while, more than 50 years later, machines appear nowhere close to that goal? Perhaps the answer was offered by Professor McCarthy himself in a paper he published in 1959, in which he describes the qualification problem: the impossibility of listing all of the preconditions required for a real-world action to have its intended effect. The qualification problem is tightly coupled to the if-then-else structure used in procedural programming and formal logic. In order to select the correct action from the then and else cases, the current state of reality must first be qualified by a series of nested if conditions. This turns into the task of defining every contingency before the action can be taken, a task that may itself be more complicated than the action being contemplated. The War Operation Plan Response (WOPR) AI depicted in the 1983 film WarGames was fictionally set to the task of generating contingency conditions for the problem of global thermonuclear war. WOPR’s ultimate conclusion that “the only winning move is not to play” is not only a poetic treatise on nuclear warfare but also an enigmatic solution to the qualification problem: “It will take infinite time to qualify the problem, therefore the ultimate action is no action at all.”
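
To make the connection to if-then-else concrete, here is a toy Python sketch (every function and precondition name is invented for illustration): each qualifying condition we think of invites yet another, so the nested ifs never fully capture reality.

```python
# Toy illustration of the qualification problem in if-then-else logic.
# All names and preconditions here are hypothetical; the point is that
# the list of qualifying conditions is never complete.

def attempt_to_start_car(world: dict) -> str:
    """Take a real-world action only after qualifying its preconditions."""
    if not world.get("fuel_in_tank"):
        return "abort: no fuel"
    elif not world.get("battery_charged"):
        return "abort: dead battery"
    elif world.get("exhaust_blocked"):
        return "abort: blocked exhaust pipe"
    elif world.get("some_condition_nobody_listed"):
        return "abort: unanticipated precondition"   # ...and so on, without end
    else:
        return "action: engine started (we hope)"

print(attempt_to_start_car({"fuel_in_tank": True, "battery_charged": True}))
```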

While this philosophical exercise is great fodder for the classroom and journals, there exists a very real need for software systems capable of taking action in real-time situations involving myriad sensor inputs, state variables, situation assessments, and environmental conditions. Advances in formal and fuzzy logic, expert systems, neural networks, genetic algorithms, state machines, relational databases, and swarm intelligence have each shown success in limited environments with well-controlled and highly defined preconditions; what is still missing is an overall system of algorithms that shows promise of integrating these AI components into a functioning whole capable of learning and adapting to an evolving environment of newly discovered preconditions.

Jeff Hawkins addressed this missing link in his 2005 book, On Intelligence [4], in which he stated the case for looking to nature’s solution to the qualification problem in the form of the neocortex of the mammalian brain. This thin layer of networked neurons envelops the brain and serves as an interface between the functional brain subsystems and the outside world, complete with its infinite universe of preconditions. Hawkins and his colleague and Numenta co-founder, Dileep George, describe the importance of studying the neocortex for its ability to overcome the No Free Lunch (NFL) theorem, which states that a particular learning or optimization algorithm is superior to all other algorithms only because of built-in assumptions placed there by the algorithm’s designer. The NFL is an instance of the qualification problem inasmuch as it states that an algorithm works well only when the entire set of unknown variables is set to default values.
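
For reference, the formal statement of the NFL theorem from Wolpert and Macready (1997), which the paragraph above paraphrases, says that for any two search algorithms a1 and a2, summed over every possible objective function f,

$$\sum_{f} P\left(d_m^{y} \mid f, m, a_1\right) = \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right),$$

where d_m^y denotes the sequence of cost values observed after m evaluations: averaged over all possible problems, no algorithm outperforms any other.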

The biological function of the neocortex as theorized by Numenta is that of a memory-prediction algorithm having the following characteristics (a toy sketch of this structure follows the list):

The function of the neocortex is to construct a model for the spatial and temporal patterns to which it is exposed. The goal of this model construction is the prediction of the next pattern of input.

The neocortex itself is constructed by replicating a basic computational unit or node.

The nodes of the neocortex are connected in a tree-shaped hierarchy.

The neocortex builds its model of the world in an unsupervised manner.

Each node in the hierarchy stores a large number of patterns and sequences.

The output of a node is in terms of the sequences of patterns it has learned.

Information is passed up and down the hierarchy to recognize and disambiguate information propagated forward in time to predict the next pattern.
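
A toy Python sketch of that structure, written for this repost and not taken from NuPIC, may help: replicated nodes arranged in a tree, each storing the sequences of patterns it has seen and expressing its output as the index of the recognized sequence, which is passed up to its parent.

```python
# Toy sketch of the hierarchy described above (illustrative only, not NuPIC).
# Each node memorizes sequences of input patterns, learns without supervision,
# and reports the index of the recognized sequence to its parent.

class Node:
    def __init__(self, children=None):
        self.children = children or []   # tree-shaped hierarchy of identical units
        self.sequences = []              # learned sequences of patterns

    def learn(self, sequence):
        """Unsupervised learning: store any sequence not seen before."""
        if sequence not in self.sequences:
            self.sequences.append(sequence)

    def output(self, sequence):
        """Output is expressed in terms of the sequences the node has learned."""
        self.learn(sequence)
        return self.sequences.index(sequence)

def feed(node, pattern_sequence):
    """Pass child outputs up the hierarchy; the root names the whole pattern."""
    if not node.children:
        return node.output(tuple(pattern_sequence))
    child_outputs = tuple(feed(child, pattern_sequence) for child in node.children)
    return node.output(child_outputs)

# Two leaf "sensor" nodes feeding one parent node.
root = Node(children=[Node(), Node()])
print(feed(root, ["A", "B", "C"]))   # 0: first sequence the hierarchy has learned
print(feed(root, ["A", "B", "C"]))   # 0 again: recognized as the same sequence
```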

The predictions made by the neocortex are compared against the pattern of current sensor input. When there is a mismatch, the neocortex is “surprised” that its assumptions were wrong, and it is tasked with finding an appropriate set of assumptions that corrects the situation in a process we call “thinking.” This process is both generic and ubiquitous, does not require specialized NFL design assumptions, and is used to interface all of the senses. In his Ph.D. thesis, Dileep George describes the algorithmic and mathematical counterparts of the biological memory-prediction framework, which have been developed and are known collectively as the Hierarchical Temporal Memory (HTM) algorithm. The HTM has been realized in Python to produce the Numenta Platform for Intelligent Computing (NuPIC) API, which is available for download under licenses for academic research and commercial systems. Initial applications have been developed for computer vision that are very robust against noise and against changes in scale, inversion, rotation, and perspective that would otherwise introduce an explosion of specific preconditions. Some members of the AI community have criticized the HTM approach as a repackaging of existing technology; however, the HTM framework positions itself as an interface between reality and AI algorithms, much like the neocortex serves as a liaison between biological sensorimotor control systems and the world.
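
The prediction-and-surprise loop in that paragraph can be illustrated with a minimal sketch in plain Python (this is not the NuPIC API): the model predicts the next pattern from the current one, compares the prediction with the next sensor input, and revises its stored assumptions whenever the two disagree.

```python
# Toy memory-prediction loop (illustrative only; not the NuPIC API).
# The model predicts the next symbol, compares it with the actual input,
# and updates its memory whenever it is "surprised" by a mismatch.

memory = {}       # learned transitions: current pattern -> predicted next pattern
previous = None

sensor_stream = ["A", "B", "C", "A", "B", "C", "A", "B", "D"]

for current in sensor_stream:
    if previous is not None:
        predicted = memory.get(previous)
        if predicted == current:
            print(f"predicted {current!r} correctly")
        else:
            # Mismatch between prediction and sensor input: "surprise".
            print(f"surprised: after {previous!r} expected {predicted!r}, saw {current!r}")
            memory[previous] = current   # revise assumptions ("thinking")
    previous = current
```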

This material originally appeared as a Contributed Editorial in Scientific Computing 25:9 September/October 2008, pg. 14.

William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. Degree with Double Majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in Ultrafast LASER Spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of New Products and Innovation.
