On the Teaching of Near Future Emergent Intelligences


The embrace of continuous variables in our formalizations of nature implies infinite information. It is analogous to the notion of the aether: the assumption that there must always be some medium serving as the substrate for the propagation of discrete particles.

Continuousness is an emergent property of measuring discrete interactions at different scales. There is no aether, and there are no infinite-precision variables.

When we examine chemistry, all the energy jumps between the electron shells of an atom are discrete. When we examine biology, energy is released in discrete packets by the breaking of bonds in ATP.

When we examine the brain, we see only neuronal spiking behavior; we don't see continuous dynamics like fluid flow. But just as fluid flow is a consequence of the discrete interactions of molecules, the behavior of the brain is an emergent phenomenon.

Artificial Neural Networks are modeled using continuous dynamics, and they do a surprisingly good job of demonstrating System 1 (i.e. intuition) behavior. The ensemble behavior of ANNs may well be the behavior of spiking neurons at scale.

However, this is not how computers work. It makes no sense to sample the ensemble behavior of what goes on inside a computer to predict its behavior. Computer technology is built on different design principles from those of biological systems.

Of course, I'm still perplexed as to why the ensemble models found in physics are so effective at modeling the bulk behavior of brains.

Practitioners of connectionism do not find it odd, because they begin from the belief that an artificial neuron is a good approximation of a real biological neuron. But that assumption isn't true, so it is very odd that it works so well.

The best models we have of biological brains are the artificial neural networks found in deep learning. A lot of neuroscientists will disagree, but that is like questioning Navier-Stokes as a valid model of fluids.

All models are wrong, but some are useful. Deep learning networks are unexpectedly very useful. Neuroscientists may argue all they want about biological plausibility, but we cannot deny the useful ensemble behavior that these networks exhibit.

The ensemble behavior of neural networks is not like the behavior of physical systems, despite their common analytic underpinnings. Physical systems obey conservation rules; virtual systems like ANNs do not.

This doesn't prevent researchers from conjuring up BS conservation rules, like conservation of probability, to lend some 'formal' analysis to the systems they create. But it's all smoke and mirrors, there to give the illusion of control over what they are creating.

Just as there are no conservation rules in the software programs we write, there are no conservation rules in virtual neural networks. We can, however, inject conservation rules to control behavior, much as central banks control the money supply.

In deep learning parlance, this injection goes by the euphemism "regularization." A programmer can exert control over a deep learning system through regularization, both explicit (e.g. weight penalties) and implicit (e.g. the noise of SGD).
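To make this concrete, here is a minimal sketch, assuming PyTorch (the toy model, data, and numbers are placeholders), of both kinds of control: an explicit L2 penalty injected through weight_decay, and the implicit regularization that falls out of SGD's noisy mini-batch updates.

```python
# A minimal sketch of explicit vs. implicit regularization (assumes PyTorch).
# The model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                        # toy model
x, y = torch.randn(64, 10), torch.randn(64, 1)  # toy data

# Explicit control: weight_decay adds an L2 penalty on the weights.
opt = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

for _ in range(100):
    # Implicit control: random mini-batches make the gradient noisy,
    # and that noise itself biases training toward simpler solutions.
    idx = torch.randint(0, 64, (16,))
    loss = nn.functional.mse_loss(model(x[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Neither knob dictates what the network computes; both only nudge where training settles.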

But it is more like tending a bonsai tree than it is actual engineering. People don't program neural networks; they tend them the way one tends a garden. It's gardening, not programming.

The design and development of these complex behavioral systems are very strange indeed, and as we invent even more capable neural networks it will only get stranger. Prompt design in GPT-3 is an example of where this is heading. (see: Why GPT-3 feels like Programming)
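As a sketch of what that looks like, consider how a few-shot prompt "programs" a model by example. The complete() function below is a hypothetical stand-in for a text-completion API such as GPT-3's, not a real client library:

```python
# A hypothetical sketch of prompt design: the "program" is a handful of
# worked examples, and the model infers the task from their pattern.
def complete(prompt: str) -> str:
    """Stand-in for a call to a text-completion API (e.g. GPT-3).
    Hypothetical; plug in a real client here."""
    raise NotImplementedError

def to_french(word: str) -> str:
    # Two demonstrations, then the query: no weights change, no gradients.
    prompt = (
        "English: cat -> French: chat\n"
        "English: dog -> French: chien\n"
        f"English: {word} -> French:"
    )
    return complete(prompt)
```

Nothing about the network is modified; the "teaching" happens entirely in the prompt.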

One can make the argument that it is more like teaching than it is like programming. (see: Deep Teaching: The Sexiest Job of the Future)

What makes for a good teacher is the ability to understand the mind of the student. But what does it mean to understand the mind of an artificial neural network? Will competence in this skill be more like an art than engineering?

It is commonly believed that the ability to draw is a talent. This is not true. Drawing is like learning a language: it is a language with a different vocabulary, and as with any language, one becomes fluent only through practice.

In the same way, these "deep teachers" of artificial networks will discover the unique vocabulary of the systems they work with. In the beginning it will be tacit knowledge, but over time it will coalesce into recurring design patterns. (see: deeplearningpatterns.com)

There are always many layers of a technology stack that one can play in. The software revolution required chip designers, OS developers, network protocol designers, database designers, UI/UX designers, and so on. It will be the same for the new AI systems we are developing.

As we accelerate toward this new future, you have to ask yourself where in the emerging stack you want to play. (see: Why Deep Learning Needs Standards for Industrialization)

If we spread ourselves too thin, we become jacks of all trades and masters of none. The one thing that is scarce is our attention, so focus that attention wisely!

I kind of like this deep teaching metaphor, and it is perhaps where my focus will be.
