What Deep Learning can Learn from Cybernetics

Carlos E. Perez
Published in Intuition Machine
Dec 16, 2018

“Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.” — George Santayana

Not many Deep Learning (DL) researchers have spent enough time reading about Cybernetics (derived from the Greek word meaning “the art of steering”). This is unfortunate, in that DL research continues to “reinvent the wheel”. The foundations of DL can be traced back much earlier than McCulloch and Pitts’ model of the artificial neuron, to their predecessor Norbert Wiener, who wrote the book “Cybernetics: Control and Communication in the Animal and the Machine”. (Note: Both McCulloch and Pitts performed their research under Wiener.)

“It’s like a finger pointing away to the moon. Don’t concentrate on the finger or you will miss all that heavenly glory.” — Bruce Lee

Unfortunately, most DL researchers focus their attention on the artificial neuron and not the heavenly glory of Cybernetics. Allow me to further explain the deep wisdom found in Cybernetics.

The history of AI has predominantly been driven by the Good Old-Fashioned AI (GOFAI) narrative. As a consequence of this formalization, we arrive at a disembodied viewpoint. The Deep Learning narrative has been the dominant one since 2012. Unfortunately, it still inherits the essence of the GOFAI narrative, despite having a lineage of its own dating back to the publication of Wiener’s Cybernetics (1948), at the height of the previous peak in Connectionist thinking. Here is an illustrative graphic that shows the peaks and valleys of the two competing AI narratives.

Alan Turing had in fact explored Connectionist thinking, but his papers on the subject were not published until 14 years after his untimely death in 1954. Norbert Wiener, who collaborated with Turing, passed away a decade later (1964). So Turing’s thoughts would not see the light of day until 1968, at the beginning of the emergence of Symbolist thinking.

http://www.cs.virginia.edu/~robins/Alan_Turing%27s_Forgotten_Ideas.pdf

With impeccable timing, Connectionist thinking was squashed a year later by Minsky and Papert (1969) in their infamous book criticizing Rosenblatt’s Perceptron. This initiated the coup that began the imperialist reign of the Symbolists. History is always written by the victors, and thus the Symbolists buried Cybernetic thinking (perhaps permanently), long enough for most of its ardent supporters to retire and eventually pass away.

The Neural Network narrative was treated as a toxic research topic for several decades. Yann LeCun reminisces that in 1983, Geoffrey Hinton and Terrence Sejnowski had to disguise their paper “Optimal Perceptual Inference” in terminology that would not reveal its neural network origins. The phrase “Neural Network” is never mentioned in the body of the paper; its only use is in the bibliography, referencing Hopfield’s work. LeCun remarked, “Even the title of their paper was cryptic.” To add insult to injury, as late as 2001 Gary Marcus, in his book “The Algebraic Mind”, continued to push the Symbolists’ dogma, arguing that “neural systems are inconsistent with the manipulation of symbols.”

Unfortunately, lost in this dark age of AI were the original ideas discovered in Cybernetics. Paul Pangaro is one of the few remaining dutiful monks of Cybernetics’ “ancient religion”, and he provides a lengthy definition of cybernetics. I do not wish to regurgitate his words, so I recommend you read his exposition. The essence of his explanation of cybernetics is captured in this graphic:

Cybernetics

If you are coming from the classical perspective of AI (both GOFAI and DL), the left side of the diagram above serves as your mental model for AI. This is the pervasive model of AI, and it has been indoctrinated into its disciples for decades. It doesn’t matter if you are a Symbolist or a Connectionist; the mental model has been cleansed of any remnants of the ancient teachings of Cybernetics.

I’ve seen the above diagram several times, but it took me a while to truly appreciate what it means. Allow me, therefore, to decipher its meaning from the perspective of current Deep Learning research.

Let’s begin at the top of the diagram.

Cognitive systems are autonomous. To understand this, we have to realize what distinguishes biological life from inanimate objects. Biological life is autonomous; organisms exhibit their own intentional behavior. That is, they are all cognitive systems whose autonomous behavior evolved toward surviving within their adapted environments. I explore this in more detail in “nano-intentionality”.

Organisms map through an environment back into themselves. To understand this, we have to begin with the viewpoint that all cognition originates from embodied learning. An organism learns by interacting with its environment. There is, however, a relationship between environment and organism that involves both memory and representation. An organism has bounded rationality; as a consequence, it employs its environment as a way to offload cognitive load. An organism does not remember or represent everything; it leaves much of that to the environment. What is learned are affordances, that is, only the information that is useful, and the organism uses that information in conjunction with what it observes in the environment to predict its next action.
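To make this concrete, here is a minimal sketch, entirely my own illustration rather than anything from Pangaro or Wiener, of an agent whose only memory is an affordance table. The Environment and Agent classes and their payoff structure are hypothetical; the point is that everything else about the world stays in the world and is recovered by observation on every step.

```python
import random

class Environment:
    """A toy world: each cue affords exactly one rewarding action."""
    def __init__(self):
        self.payoff = {"lever": "pull", "button": "press"}  # hidden structure

    def observe(self):
        return random.choice(list(self.payoff))

    def act(self, cue, action):
        return 1.0 if self.payoff[cue] == action else 0.0

class Agent:
    """Bounded rationality: remembers affordances only, never the whole world."""
    def __init__(self, actions=("pull", "press")):
        self.actions = actions
        self.affordances = {}  # cue -> action that has worked before

    def predict_action(self, cue):
        # Exploit a learned affordance if one exists; otherwise explore.
        return self.affordances.get(cue, random.choice(self.actions))

    def update(self, cue, action, reward):
        if reward > 0:
            self.affordances[cue] = action  # retain only what is useful

env, agent = Environment(), Agent()
for _ in range(20):
    cue = env.observe()                  # the environment carries the state
    action = agent.predict_action(cue)   # affordance -> predicted next action
    agent.update(cue, action, env.act(cue, action))

print(agent.affordances)  # e.g. {'lever': 'pull', 'button': 'press'}
```

After a handful of interactions, the agent’s entire memory is a small affordance table; everything else it needs, it reads off the world.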

Nervous systems reproduce adaptive relationships. As a starting point, we established that all biological life is autonomous and that autonomy leads to adaptability. Even the simplest single-cell organisms have built-in autonomy and adaptiveness. In the diagram above, at the intersection between memory and reality, the same adaptiveness that is pervasive in biological life is simulated in biological brains.
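The primitive that cybernetics uses to model this adaptiveness is the negative feedback loop, the “steering” in the art of steering. Below is a minimal sketch of such a loop; the set point, gain, and state variable are illustrative assumptions of mine, not anything drawn from the diagram:

```python
def feedback_loop(set_point=20.0, state=5.0, gain=0.4, steps=25):
    """A bare negative feedback loop: sense the error, feed back a correction."""
    trajectory = [state]
    for _ in range(steps):
        error = set_point - state  # sense: compare goal against reality
        state += gain * error      # act: correction proportional to the error
        trajectory.append(state)
    return trajectory

# The state converges on the set point with no model of the world at all.
print([round(s, 2) for s in feedback_loop()[-5:]])
```

Note that the adaptiveness lives in the loop between sensing and acting, not in any stored representation of the world.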

Social agreement is primary objectivity. The intersection of knowledge and reality can be understood within the framework of Semiotics. The gist of the argument is that knowledge is captured by icons, indexes, and symbols, and that our cognitive development needs to be grounded in icons. Indexes are learned affordances. Symbols arise from words whose meanings originate in their use. I’ve explored this in more detail in Deep Learning and Semiotics.

Intelligence resides in observed conversations. The most advanced form of intelligence is one that gains knowledge through conversations. The gist of this is that our complete human intelligence arises from our ability to manage conversations within a social environment. The concept of a conversation, however, can represent the dynamic interplay of interactions between any organisms. The ability to track these interactions and arrive at predictions is the highest form of general intelligence. I’ve explored this more extensively in Conversational Cognition.

Why, then, is this Cybernetic perspective better than the conventional AI perspective depicted on the left side of the diagram? The primary difference is that AI seems to ignore the holistic nature of organisms and ecosystems. Everything in AI is framed from a mechanistic and objective point of view in which there are absolutes: information manipulation, information storage, formal ontologies, and strict boundaries. The thinking is that intelligence can be independent of the environment or context. These are all artifacts of GOFAI thinking, but unfortunately they have infected Connectionist thinking as well.

Second-order cybernetics, which introduces the observer into its discourse, provides a richer foundation for understanding learning than the disembodied and context-free viewpoint of classical AI. In fact, this second-order notion maps cleanly to ideas found in meta-learning. Deep Learning advances reveal a viewpoint that is more compatible with what’s found in Cybernetics. This should not be a surprise; after all, Cybernetics is inspired by biology, and both were inspirations for the artificial neuron.
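To see the mapping, consider a toy illustration (my own construction, not a published meta-learning algorithm): an inner loop learns about the world by gradient descent, while an outer loop observes the inner learner’s loss trajectory and adapts its learning rate. The quadratic loss and the constants are assumptions made purely for the sketch:

```python
def loss(w):
    return (w - 3.0) ** 2      # toy objective, minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.8
prev = loss(w)
for _ in range(30):
    w -= lr * grad(w)          # first-order loop: learn about the world
    cur = loss(w)
    # Second-order loop: learn about the learning itself. Grow the step
    # while progress continues; cut it back when the loss gets worse.
    lr *= 1.1 if cur < prev else 0.5
    prev = cur

print(round(w, 4), round(lr, 4))   # w approaches 3.0
```

The outer loop never touches the world directly; it only watches the learner. That is precisely the second-order stance: an observer of the first-order system becomes part of a larger adaptive loop.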

Jonathan Haidt’s research on moral intuitions argues that our sense of morality is intuitive and natural, and it explains the difficulty of persuading others through rational argument without appealing to their personal intuitions.

The GOFAI intuition has its source in the analytic traditions found in engineering and mathematics. However, complex systems like biology and the mind are known not to be engineered (or designed) but rather grown. So there is a cognitive dissonance in a maximalist engineering mindset when working with biological-scale complexity. This also explains why the Cybernetic viewpoint employs language that seems so alien to many in the hard sciences, which is unfortunate considering that Norbert Wiener was himself a mathematician.

Despite Cybernetics’ demise as a narrative for AI, it has influenced other fields of study that involve complex systems and culture:

http://www.dubberly.com/articles/cybernetics-and-counterculture.html

Deep Learning will make accelerated progress when ideas from adjacent fields such as evolutionary biology, non-linear dynamics, and complexity theory are incorporated into the research vocabulary. It is curious that Norbert Wiener’s Cybernetics covers such a rich variety of topics: groups, statistical mechanics, communication, feedback, oscillation, gestalt, information, language, learning, self-replication, and self-organization. Perhaps it should be required reading for any present-day Deep Learning researcher.

Norbert Wiener had such a deep understanding of the interplay of cognitive machines and humans that he wrote a follow-up book exploring it in greater detail:

In “The Human Use of Human Beings”, Wiener explores the same social issues that we are only beginning to collectively take seriously today. Wiener wrote (68 years ago) that there is danger in entrusting decisions to automation that is unlikely to identify with human values which are not purely utilitarian. We are unfortunately just beginning to realize the harmful effects of the misalignment of automation (i.e. governments, corporations, the internet, and AI) with human values. Cybernetics has always emphasized the interaction of humans and machines, and thus Deep Learning practitioners can discover in it ideas that reach beyond the conventional technical horizon.

Further Reading

Contemporary Cybernetics and Its Facets of Cognitive Informatics and Computational Intelligence

Navigating the Affordance Landscape: Feedback Control as a Process Model of Behavior and Cognition

Exploit Deep Learning: The Deep Learning AI Playbook
