Sustainable Deep Learning Architectures require Manageability

Carlos E. Perez
Published in Intuition Machine
Apr 19, 2018

Every complex technology requires manageability to be economically sustainable.

— Carlos E. Perez (from defunct Manageability blog)

This is an important consideration that is often overlooked in the field of Artificial Intelligence (AI). I suspect very few academic researchers appreciate this aspect. The work performed in academia is distinctly different from the work required to make a product that is sustainable and economically viable. It is the difference between code written to demonstrate a new discovery and code written to support the operations of a company. The former tends to be exploratory and throwaway, while the latter tends to be exploitative and must be sustainable. The two are intrinsically opposite ends of a spectrum.
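To make the contrast concrete, here is a minimal sketch of the same toy training step written both ways. This is my own illustration, not code from any company or paper; the names, numbers, and update rule are assumptions chosen only to show the difference in style.

```python
import logging
from dataclasses import dataclass

# Exploratory, throwaway style: hard-coded values, no error handling,
# and a result that lives only in the researcher's terminal.
def train_once():
    data = [1.0, 2.0, 3.0]       # toy stand-in for a real dataset
    w = 0.5
    for x in data:
        w -= 0.01 * (w * x - x)  # ad-hoc update nudging w toward 1.0
    print(w)                     # "it works on my machine"

# Operational style: configuration, logging, and failure handling,
# so that someone other than the author can run and maintain it.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("trainer")

@dataclass
class TrainConfig:
    learning_rate: float = 0.01
    max_steps: int = 1000

def train(data, config: TrainConfig) -> float:
    if not data:
        raise ValueError("empty dataset")  # fail loudly, not silently
    w = 0.5
    for step, x in enumerate(data[: config.max_steps]):
        w -= config.learning_rate * (w * x - x)
        log.info("step=%d weight=%.4f", step, w)
    return w  # returned and testable, not just printed

if __name__ == "__main__":
    train_once()
    print(train([1.0, 2.0, 3.0], TrainConfig()))
```

The second version is longer and duller, and that is precisely the point: sustainability is paid for in exactly this kind of unglamorous scaffolding.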

There are many areas where it is clear how AI can be of great benefit. However, we need to pause and consider the manageability of a proposed system. In a messy world of ever-changing needs and of systems that always fail, how then shall we build systems whose value to us far exceeds the cost of maintaining them? We will fail in deploying AI if we have little understanding of how to scale its deployment. AI is in dire need of operating environments where technical debt is contained and innovation is allowed to flourish unconstrained.

In a previous post, I explored how Uber and Google have built Deep Learning systems to manage technical debt. I also previously explored an appealing architecture that is biologically inspired. This involves capabilities such as redundancy, heterogeneity, modularity, adaptation, prudence, and embeddedness. These are ideas that are conventionally outside the vocabulary of current machine learning experts. Clearly, there is a need to understand how a much larger (and economically viable) AI-driven system is to be built.
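As a loose sketch of how a few of these capabilities might surface in code, consider a serving wrapper that falls back to a simpler model when the primary one fails and abstains when confidence is low. This is my illustration under stated assumptions; the class, threshold, and stand-in models are hypothetical, not taken from the posts cited above.

```python
from typing import Callable, Tuple

Prediction = Tuple[str, float]  # (label, confidence)

class RedundantPredictor:
    """The primary model can be swapped out (modularity), a simpler
    backup covers its failures (redundancy), and low-confidence answers
    are refused rather than trusted (prudence)."""

    def __init__(self, primary: Callable[[str], Prediction],
                 backup: Callable[[str], Prediction],
                 min_confidence: float = 0.7):
        self.primary = primary
        self.backup = backup
        self.min_confidence = min_confidence

    def predict(self, x: str) -> Prediction:
        try:
            label, conf = self.primary(x)
        except Exception:
            label, conf = self.backup(x)  # redundancy: degrade gracefully
        if conf < self.min_confidence:
            return ("abstain", conf)      # prudence: know when not to answer
        return (label, conf)

# Illustrative stand-ins for real models.
def big_model(x: str) -> Prediction:
    raise RuntimeError("GPU node unavailable")  # simulate an outage

def small_model(x: str) -> Prediction:
    return ("cat" if "meow" in x else "dog", 0.8)

predictor = RedundantPredictor(big_model, small_model)
print(predictor.predict("meow meow"))  # -> ('cat', 0.8) via the backup path
```

Nothing here is specific to deep learning; the point is that the biological vocabulary translates into concrete engineering obligations.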

Michael Jordan recently wrote in “AI — The Revolution Hasn’t Happened Yet” that the principles for designing “planetary-scale inference-and-decision-making systems” are not yet a well-understood discipline. It is hard to design something of this complexity when that something doesn’t yet exist. We are all making it up as we move forward!

Where can we find the design inspiration to minimize our mistakes? Michael Jordan provides a hint in a forgotten discipline known as Cybernetics. As Jordan astutely observes:

It is Wiener’s intellectual agenda [Cybernetics] that has come to dominate in the current era, under the banner of McCarthy’s terminology [Artificial Intelligence].

It is one tragedy of our civilization that those who conjure up the more appealing label receive the lion’s share of the credit. The value of branding should not be dismissed, even in the supposed meritocracy we imagine we have in the sciences.

Cybernetics studies the interplay of intelligent systems, both human and machine. This differs from the goal of Artificial Intelligence, which focuses on attaining human-level general intelligence. The majority of excitement and research funding is focused on the nuts and bolts of creating more capable cognitive machinery. Unfortunately, cybernetics, the study of better collaboration between humans and machines, remains grossly underfunded. Human-in-the-loop systems are today addressed by a few in the UX community; however, this yields an extremely narrow and incomplete perspective.
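A minimal human-in-the-loop sketch, under my own assumptions rather than any reference design from cybernetics, routes uncertain machine decisions to a human and folds the answers back in as feedback:

```python
import random

def model_predict(item: str):
    """Hypothetical stand-in model: returns (label, confidence)."""
    conf = random.random()
    return ("approve" if conf > 0.5 else "reject", conf)

def ask_human(item: str) -> str:
    """Placeholder for a real review queue or UI."""
    return "approve"  # imagine a reviewer's judgment arriving here

labeled_by_human = []  # feedback the machine can later learn from

def decide(item: str, threshold: float = 0.8) -> str:
    label, conf = model_predict(item)
    if conf >= threshold:
        return label                         # machine handles the easy cases
    answer = ask_human(item)                 # human handles the uncertain ones
    labeled_by_human.append((item, answer))  # close the loop: new training data
    return answer

for doc in ["loan-001", "loan-002", "loan-003"]:
    print(doc, "->", decide(doc))
```

The interesting (and underfunded) questions sit precisely at this boundary: where the threshold should lie, how reviewers are chosen, and how their feedback reshapes the machine.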

The recent Facebook and Cambridge Analytica fiasco and the manufacturing failures at Tesla are testaments to the intrinsic complexity of these systems. We all simply lack the cognitive framework to properly manage this complexity. It goes beyond having a handle on the relevance, provenance, and reliability that we find in today’s conventional information systems. The failures of Facebook involve privacy and security. The failures of Tesla involve properly balancing human and robotic work. In both instances, humans must be included as a variable in the complete equation. Unfortunately, we are all at a loss as to the shape of this equation.

One may divide the concerns of AI-driven architectures into two areas, which Michael Jordan labels Intelligence Augmentation (IA) and Intelligent Infrastructure (II). IA, the design of systems that handle the cognitive load of workers, is a topic I discuss in detail in The Deep Learning AI Playbook. II, in contrast, involves the design and development of sustainable infrastructure. An intuitive sense of the difficulty of this area is best conveyed by the “AV Problem”: the audio-visual problem that we can’t seem to solve every time we give a presentation. Countless hours are wasted every single day trying to interface a computer with an audio-visual system. It is a problem that seems simple enough, yet it defies a solution. Solve the AV problem and you will have made a gigantic step toward solving the II problem.


Exploit Deep Learning: The Deep Learning AI Playbook
