Cybernetics Versus Informatics: Understanding the Type Wars

Two competing theories of the fundamental nature of system organization underlie this contentious debate in computing

The early history of computers was a story of both mathematical and electrical engineering achievements, and because of this, two distinct but interacting cultures of computing emerged early on. These two traditions were reflected in the first high-level programming languages, Lisp and Fortran: Lisp was a product of mathematical interests, while Fortran served the interests of engineers seeking to control machines efficiently and with greater ease.

This article is not about the differences between mathematical purists and technical pragmatists (there has seldom been a clear division, with each borrowing from the other). Our focus here is on the theoretical orientations of informatics and cybernetics, which have found affinity with the mathematical and technical cultures respectively. This is not to imply that cybernetic approaches don’t make use of math, but rather that the horse pulls the cart and not vice versa. These two approaches remain at odds with one another, and are at the heart of what we have come to regard as the “Type Wars”. While the terminology as it has been used in the field is somewhat inconsistent, I will use the term informatics to refer to the goal of systems that consume data, manipulate it in terms of its semantic meaning, and produce meaningful results, and cybernetics to refer to the goal of systems that satisfy requirements meaningfully and adaptively in novel and uncertain environments.

In addition to bringing some clarity to this often obfuscated conflict underlying the type wars, I will try to present a convincing argument that while category theory may help our craft in limited but important ways, the greater aspirations of informatics are unlikely ever to be delivered on, in computing or in cognitive science. Cybernetic approaches, in contrast, show a great deal of promise, and the current wave of progress in model-free AI and machine learning (notably not in computationalist methods) underscores this.

The Informatics Tradition in Computing and Psychology

Lisp, being an elegant and expressively powerful language, found a niche in early forays into artificial intelligence, and became associated with the idea of symbolic computation, or computing with mathematical expressions. Symbolic computation was the province of computationalism, the theory that the brain is a computer, and that thought and computation are one and the same. Relatedly, and often in connection with AI and computational cognitive modeling efforts, cognitive psychologists began to adopt a neo-Cartesian philosophy of “information processing” borrowed from computing, seeking mechanisms by which raw sensory data is converted into a representation of the world, which is then processed and passed from brain module to brain module.

While the failure of the information processing model to significantly advance our understanding of cognition (more on this below) has been more gradual and protracted, the application of these ideas in computing famously collapsed under the weight of promises it could not deliver on. Unfortunately for the broader adoption of Lisp as a language, its close association with this naively ambitious and ultimately failed experiment in industry and academia meant that the AI Winter was also a cold period for Lisp.

Even as symbolic AI collapsed and major problems were found with the theory of information processing, another thread of the mathematical tradition in computing, one so old that it predates the existence of computers themselves, quietly continued to grow and improve: static type checking and formal program verification. Finding its roots in type theory and category theory, languages such as ML, Miranda, Haskell, and Coq leverage formal proof as a tool for improving code quality.

While this might seem like nothing more than a modest and practical approach to safely developing software, today the proponents of the “programs are proofs” school aspire to epic ambitions rivaling those of the first wave of artificial intelligence. The highly influential Rosetta Stone paper, for example, asserts that the equivalence of propositions with types, physical systems, and topological manifolds, all encompassed by the object abstraction of category theory, points to the development of a “general science of systems and processes”. Far from being a mere practical concern, this amounts to a resurrection of the information processing approach. This time, the category theorists say, we will successfully model the world with software. This time, they claim, we’ll build software that captures meanings in representations, and by processing these representations we will extract and produce other useful meanings.

The Cybernetic Tradition in Systems Biology and Computing

Perhaps ironically, or perhaps due to a small remnant Lisp community’s deliberate and successful attempts to shake off old associations over the course of the 90s and 00s, Lisp came to be associated with a new narrative. As Lisp regained some popularity in the early 00s, it came to be regarded as the alternative to an overly purist kind of functional programming preoccupied with types and proofs. Lisp, in effect, switched sides to the other culture of computing: not in the sense of static vs. dynamic (it had always been dynamic), but in the sense of being positioned opposite the objectives of informatics, playing up its strengths as a dynamic language over its strengths in symbolic processing. After all, it had gained a reputation as a flexible language for things like evolutionary computation, as a diverse multi-paradigm set of building materials for domain-specific languages, and in general as a powerful tool enabling one to take on ambitious problems of all kinds.

By the end of the decade, Rich Hickey, an outspoken critic of many of the claims made by static typing advocates, had succeeded in delivering Clojure and building a larger Lisp community than the world had ever before witnessed. Clojure also brought many good ideas into Lisp from other functional languages, such as immutable data structures and laziness. In May of 2016, Rich Hickey announced clojure.spec, integrating a specification system into the core of the language, and took the opportunity to say:

expressivity > proof
There is no reason to limit our specifications to what we can prove, yet that is primarily what type systems do. There is so much more we want to communicate and verify about our systems. This goes beyond structural/representational types and tagging to predicates that e.g. narrow domains or detail relationships between inputs or between inputs and output. Additionally, the properties we care most about are often those of the runtime values, not some static notion. Thus spec is not a type system

Doubtless some read this paragraph in clojure.spec’s stated rationale as nothing more than an opinionated jab at the programming language competition, but that would ignore what is actually going on here. This is a declaration of Clojure doubling down on its commitment to a philosophy of expressiveness and robustness through adaptiveness, in contrast with a philosophy of pre-conceived proof and self-contained meaning independent of runtime environments.
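To make this concrete, here is a minimal sketch of the kind of specification the rationale describes, written against the clojure.spec.alpha namespace as it later shipped. The ::adult-age spec and the ranged-rand function are illustrative examples of mine, not taken from the rationale: the first narrows a domain with an arbitrary runtime predicate, and the second details a relationship between a function’s inputs and its output.

(require '[clojure.spec.alpha :as s])

;; A domain narrowed by an arbitrary predicate, not just a structural tag:
;; an "adult age" is an integer AND at least 18.
(s/def ::adult-age (s/and int? #(>= % 18)))

(s/valid? ::adult-age 42) ;; => true
(s/valid? ::adult-age 9)  ;; => false

;; A function whose spec relates inputs to output: the result must fall
;; in the half-open range [start, end).
(defn ranged-rand [start end]
  (+ start (long (rand (- end start)))))

(s/fdef ranged-rand
  :args (s/and (s/cat :start int? :end int?)
               #(< (:start %) (:end %)))
  :ret int?
  :fn (s/and #(>= (:ret %) (-> % :args :start))
             #(< (:ret %) (-> % :args :end))))

;; These specs are checked against runtime values, via instrumentation or
;; generative testing:
;; (require '[clojure.spec.test.alpha :as stest])
;; (stest/instrument `ranged-rand)
;; (stest/check `ranged-rand)

Note that nothing here is proven statically; the specs are ordinary predicates evaluated against the values the running program actually produces.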

In the “Type Wars” article we referred to above, Bob Martin makes the case that TDD (and presumably generative testing) is what dynamic languages have to offer in place of a type system. While the truth of this is evident to those who have experience with it, what might not be readily evident is that this is one of many examples of the cybernetic tradition in software development. TDD (and BDD, etc.) is an outside-in approach that emphasizes the runtime behavior of the system under test. The purpose is to make systems more robust to changing conditions. Tests can be viewed as regulating the behavior of code, keeping functional requirements in place as the code that produces behaviors evolves over time.
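As a sketch of what that regulation looks like in practice, here is a generative test written with clojure.test.check; the property itself (that sorting orders a vector while preserving its elements) is a made-up example of mine, not something from Bob’s article. The requirement is stated once as a property over runtime values, and randomly generated inputs keep probing the code against it as the implementation evolves.

(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; A behavioral requirement expressed as a property: sorting any vector of
;; integers yields an ordered sequence containing exactly the same elements.
(def sort-regulator
  (prop/for-all [v (gen/vector gen/int)]
    (let [s (sort v)]
      (and (every? (fn [[a b]] (<= a b)) (partition 2 1 s))
           (= (frequencies v) (frequencies s))))))

;; Run the property against 100 randomly generated vectors; a failing case
;; is shrunk toward a minimal counterexample before being reported.
(tc/quick-check 100 sort-regulator)
;; => {:result true, :num-tests 100, ...}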

To explore further, it’s important to note that this approach of adaptive control, or environmental shaping of self-organizing systems, is not merely a practical convenience, but one that has been developed over many decades of scientific investigation. Early work on cybernetics, beginning in the 1940s, focused on adaptive behavior in physical systems such as machines, and was the precursor to later work in machine learning, such as reinforcement learning. The aim of cybernetics was to find general principles of control applying to all systems. By the 1970s, cybernetics began to focus more on systems biology and mechanisms of control in living systems. In this period, Humberto Maturana and Francisco Varela coined the term “autopoiesis” to describe the phenomenon of systems capable of maintaining and reproducing themselves. The study of autopoietic mechanisms led to the development of enactivism, the theory that perceptual experience is fully constituted by sensorimotor skills of probing and effecting environments, thus constructing modes of access and presence.

Earlier we mentioned the failure of cognitive science to conceive of a way for raw sensory data to be converted into a representation of the world, such that it could be processed meaningfully by a computer-like brain, enabling intelligent decisions. We drew a direct line of comparison to the same problem not having been solved in computing, despite major efforts in GOFAI and now in category theory. The enactivist cybernetic framework, in contrast, does not suffer from these problems. Consider, for example, a dog trying to access a treat on the other side of a barrier. First of all, the dog skillfully and flexibly exploits affordances of the environment, so there is no need to represent a world that she is directly present in. Secondly, the dog perceives a thing that she has learned will produce a rewarding outcome, and experiments with available affordances, such as the climbable aspects of the barrier, in order to get to the treat. The informatician would overcomplicate this by trying to devise a representation of the world in the dog’s brain, and then process it to produce a transformation of that representation that would generate an action plan, or something along those lines. In the enactive frame of reference, the only thing that needs to be in the dog’s brain is some way of remembering what skills work effectively in what direct contexts of action, where knowing the contexts of action is on equal footing with any other skill. The dog applies sensorimotor skills that have worked in similar circumstances, and no representation or manipulation of representations is ever needed.
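If we wanted to caricature this “remember what works in what context” story in code, a minimal sketch might be tabular Q-learning: the only thing the agent stores is a map from context/action pairs to a learned value, updated by reward feedback, with no model of the world anywhere. The states and actions below (:barrier, :climb, :treat) are hypothetical stand-ins for the dog story, not part of any library discussed here.

;; One Q-learning update:
;; Q(s,a) <- Q(s,a) + alpha * (r + gamma * max-over-a' Q(s',a') - Q(s,a))
(defn q-update
  [q [s a] reward s' actions {:keys [alpha gamma]}]
  (let [current   (get q [s a] 0.0)
        best-next (apply max 0.0 (map #(get q [s' %] 0.0) actions))
        target    (+ reward (* gamma best-next))]
    (assoc q [s a] (+ current (* alpha (- target current))))))

;; After being rewarded once for climbing at the barrier, the agent's
;; estimate of "climbing works in the barrier context" increases:
(q-update {} [:barrier :climb] 1.0 :treat [:climb :wait]
          {:alpha 0.1 :gamma 0.9})
;; => {[:barrier :climb] 0.1}

This is only a caricature, but it makes the contrast plain: what gets stored and updated is a record of what worked in what context, not a representation of the world to be transformed into a plan.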

Given the problems of the informatics approach that cybernetic approaches do not suffer from, it seems likely that we will continue to solve such problems by developing better adaptive meta-heuristic frameworks and applying them to everyday computing needs. The dream of adaptive software agents that we train rather than program is less a pipe dream than a likely outcome of current progress in areas of research like reinforcement learning. Indeed, recent results from Google’s DeepMind and others show major advances in unsupervised control from high-dimensional sensory inputs. I’m placing a long-term bet against category theory as a silver bullet, and in favor of soft-computing frameworks in which we compose semi-autonomous adaptive agents to solve problems.