Brilliant People, Brilliant Machines: Part 1 — Table Stakes for Humans

Travis Dirks, PhD
XLabs.ai
9 min read · Oct 8, 2017

Small step for Humans, Giant Leap for Machines

This is the first installment of an ongoing series in which we seek inspiration for improved AI by studying strategies from humanity’s brilliant minds. For the introduction to the series see here.

The human being is the most fearsome technology in known existence. We can often accomplish much more by simply giving our human hardware what it asks for and expects: the base layer for brilliance. This includes things everyone knows and yet almost no one does well. In fact, these table stakes run counter to most people’s intuitive beliefs about our Ritalin-rigged, barely sleeping societal “winners”. Table stakes for human wellness include: optimal sleep, proper breathing, intermittent naps, eating well, walks, time in nature, quality relationships, and chunks of focused time for flow states. It almost doesn’t matter what your goal is; improving these will improve your mental and physical abilities. For a well-referenced case, I recommend checking out Rest.

What can we take from these aspects of human cognition in our efforts to build intelligent machines? For those in the field this may feel like déjà vu, but in a list of what appear to be the simplest of human actions — Eat, Sleep, Move, Play — we can find evidence of the incredible sophistication of the human machine. Let’s tackle two of these basal human needs and compare them to machines: Move and Sleep.

Walk, Play, Hike, Swim, Run, Lift Heavy Things — Why Does Movement Matter?

Energy Management

Humans: One of the most striking aspects of the brain is its energy requirements. The brain comprises about 2% of our body mass but burns roughly 20% of our energy. In a system like that, you are bound to see optimizations for energy use! One optimization, among several, is movement-powered boost. Much like cars that have more electricity available to power devices when the alternator is engaged, our brain has more energy when we are moving. With our blood circulating faster and our lymphatic system driven by muscular effort, more resources are available to the brain and more waste is carried away. To take advantage of the movement-powered boost, we must move in ways that provide more resources but don’t drain us through physical effort. Generally, anything from walking to comfortable aerobic activity seems to work best.

Machines: For computers, energy management arises from a trade-off: a balance between the need for more power and speed and the need not to die in a melty silicon-glass puddle from overheating. As humans, it’s much harder to drive ourselves to a fiery death from thinking (unless it’s caused by other humans who don’t like our thoughts). Our brain will convince us that we absolutely must eat a full chocolate cake when it’s low on glucose, for example. This from the same brain that berated us 15 minutes ago about eating better to lose weight. If all else fails, our brain will simply put us to sleep. We can see similar efforts at energy management in programs that manage intentionally overclocked processors.
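
As a loose illustration (not any particular chip’s or vendor’s actual firmware), here is a minimal Python sketch of the kind of control loop such programs run. The read_core_temp and set_clock helpers are hypothetical stand-ins for hardware access:

```python
# Hypothetical thermal-throttling loop: trading clock speed against heat,
# the silicon analog of the brain rationing glucose.

TARGET_TEMP_C = 85.0            # back off above this temperature
STEP_MHZ = 100                  # how aggressively to adjust the clock
MIN_MHZ, MAX_MHZ = 1200, 4800   # safe operating range

def manage_clock(read_core_temp, set_clock, current_mhz):
    """One tick of the loop: shed heat when hot, reclaim speed when cool."""
    if read_core_temp() > TARGET_TEMP_C:
        current_mhz = max(MIN_MHZ, current_mhz - STEP_MHZ)
    else:
        current_mhz = min(MAX_MHZ, current_mhz + STEP_MHZ)
    set_clock(current_mhz)      # apply the new frequency
    return current_mhz
```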

Creative Problem Solving

Humans: Walking has also been shown to dramatically increase one’s ability to think creatively, which is associated with broader brain activation. Movement causes neural activity throughout several key areas of the brain. This in turn likely lowers the energy barriers to synaptic firing in adjacent areas, a phenomenon called potentiation. The effect is similar to one of Sherrington’s laws of the nervous system. It’s also the same underlying effect leveraged in post-activation potentiation training, an advanced strength-training technique. More accurately, we might call it spatio-temporal potentiation, since the effect drops drastically with distance (within the neural network) and time (from the stimulating event).

Machines: We don’t yet have an analog of this spatio-temporal potentiation in AI or the underlying processors, unless it is explicitly coded in. Coding a similar potentiation effect into neural-net models would be rather straightforward, though I speculate it would likely be most useful in more complicated models with different and multiple training/use timescales, along the lines of a persistent, multi-skilled AI.
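
To make “straightforward” a little more concrete, here is a minimal NumPy sketch of one way it could look. Everything in it is an assumption made for illustration: “distance” is just index distance within a single 1-D layer, and the class name, boost rule, and decay constants are invented rather than drawn from any existing library.

```python
import numpy as np

class PotentiatedLayer:
    """Toy dense layer where recently fired units lower the 'energy barrier'
    (here: add a bias) for nearby units, decaying over distance and time."""

    def __init__(self, n_units, tau=5.0, sigma=2.0, strength=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, (n_units, n_units))
        self.last_fired = np.full(n_units, -np.inf)      # step each unit last fired
        idx = np.arange(n_units)
        self.dist = np.abs(idx[:, None] - idx[None, :])  # 1-D index distance
        self.tau, self.sigma, self.strength = tau, sigma, strength
        self.t = 0

    def forward(self, x):
        self.t += 1
        time_decay = np.exp(-(self.t - self.last_fired) / self.tau)  # fades with time
        space_decay = np.exp(-self.dist / self.sigma)                # fades with distance
        boost = self.strength * space_decay @ time_decay  # help from recent, nearby firing
        out = np.maximum(self.w @ x + boost, 0.0)         # ReLU with potentiation bias
        self.last_fired[out > 0] = self.t                 # record who fired this step
        return out
```

In a persistent, multi-skilled model, tau would control how long a recently exercised skill stays “warm” and cheap to reactivate.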

Flip Side of Flow State: Daydreaming and the Default Mode Network

Humans: Much has been written recently about the benefits of flow — the state in which we follow deep attention into an I-less, monologue-less state of doing. The state of flow has a fascinatingly distributed pattern of brain activity that appears to draw on diverse areas of the brain to act without hesitation in the now. You know you were in flow when it feels like minutes have gone by coding or writing, but the clock says it was hours, and so does the output.

The Default Mode Network (DMN) is in some ways the opposite of flow. It is a much more common and easier-to-reach state that, like flow, involves the shutting down of the I/inner monologue and a loss of time. The default mode network turns on in the blink of an eye when your mind starts to wander. Where flow seems to leverage the subconscious by violently wresting the controls from our hands, the DMN seems to let the subconscious do its work by sending our conscious mind off to play in some pleasant corner where we can’t hurt ourselves.

The DMN appears to be what our brain is doing right before Eureka. So what is it doing, and how can we teach machines to do it? When we can answer these questions, AI will leap ahead.

Default mode network connectivity. This image shows the main regions of the default mode network (yellow) and the connectivity between the regions, color-coded by structural traversing direction (xyz -> rgb).

Until we can answer those questions, here is what we can infer from the available evidence: along with a lot of important background processing simply to run our body, the DMN appears to leverage a wide distribution of areas in our brain, from which it seems to generate potential solutions to our problems. Different areas of our brain specialize in certain types of processing. For this reason, I believe DMN processing is more similar to trying a wide variety of models and inputs than it is to deeply training a single model. It is believed that these solutions are then run through a more linear/logical but still subconscious filter before the best of them are presented to us consciously. This is why Eureka moments have such a strange feeling of suddenly revealed knowledge. They seem to appear without effort because all the work was done while we were off in our safe corner letting the brain do its work.

Machines: How could we implement this loose understanding in the current AI context? Possibly the first sort-of AI DMN was the SETI@home project, in its determined, distributed use of undirected processor cycles. In a supervised-learning context, we actually built a DMN before we’d been exposed to the concept, at our last company, Seldn. Unlike most pipelines, which are built around a single target with human-curated inputs, the Seldn engine swallowed any data we could hand it and attempted to understand the entire world of its data. That is, it treated everything as a target, with everything else as an input, and drove its attention toward the targets and features with which it had some success. (This success-driven attention is, perhaps unintentionally, the way the scientific literature advances as well.) The first version of the Seldn engine would choose a target, choose a model from the wide array of supervised models available, and attempt to fit and validate it. It would repeat this until all the target/model pairs had been tried, then report the few that passed our criteria to our “consciousness”, whom we called “Dave”. A more sophisticated version would also choose semi-random/evolved sets of inputs for the same target, and add semi-successful models to the list of potential inputs across all targets.
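
To make the loop concrete, here is a minimal scikit-learn sketch of that first version. It is an illustration of the idea as described, not the actual Seldn code; the two-model zoo, the R² scoring, and the threshold are all stand-ins:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def dmn_sweep(data, columns, score_threshold=0.2):
    """Treat every column as a target and everything else as inputs,
    try each model in the zoo, and keep the pairs that pass our criteria."""
    model_zoo = {
        "ridge": lambda: Ridge(alpha=1.0),
        "forest": lambda: RandomForestRegressor(n_estimators=50, random_state=0),
    }
    candidates = []
    for t in range(data.shape[1]):              # everything is a target...
        y = data[:, t]
        X = np.delete(data, t, axis=1)          # ...everything else is an input
        for name, make_model in model_zoo.items():
            score = cross_val_score(make_model(), X, y, cv=3, scoring="r2").mean()
            if score > score_threshold:         # passed our criteria
                candidates.append((columns[t], name, score))
    return sorted(candidates, key=lambda c: -c[2])  # report the best to "Dave"
```

The more sophisticated version would wrap this in an outer loop that mutates the input sets and feeds the semi-successful models back in as candidate features.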

A true AI DMN would implement the Seldn stack in a SETI@home style, leveraging unused processors across machines whenever there is enough spare compute to turn on.
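
As a toy sketch of that scavenging behavior (the idle threshold and the chunking are invented, and I assume the third-party psutil library just for idle detection):

```python
import time
import psutil  # third-party; used here only to check how busy the machine is

IDLE_THRESHOLD = 20.0   # percent CPU use below which the machine counts as idle

def run_when_idle(work_chunks):
    """Run DMN work on spare cycles only, yielding when real work shows up."""
    for chunk in work_chunks:           # e.g. one (target, model) trial per chunk
        while psutil.cpu_percent(interval=1.0) > IDLE_THRESHOLD:
            time.sleep(10)              # someone else needs the machine; wait
        chunk()                         # do one small piece of DMN work
```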

Feedback between the brain’s different systems

Here is an interesting one for which I have no current ideas for porting to machines. Much of our emotional state is relayed to us through the tensioning of our various muscles, which we can feel proprioceptively much better when moving. It may be that through movement we experience a loose feedback mechanism by which the emotional state of the brain feeds into our unconscious processing of the problem at hand. This is but one example of embodied cognition! It may well be that to build better AI we must learn from more than the brain. We must learn from the body. Further, it may be that the spatial/temporal locality enforced by a body is a key ingredient.

Rest, Sleep, Nap — Why is it so important to shut down the consciousness app?

Resource Allocation/Computation Reserve Management

Humans: We sleep for many indirect reasons to do with the upkeep of our body and brain, but also directly for the processing of our problems. To really think deeply, the slick user interface that is “you” must shut down. That’s right, another way our brain manages its energy requirements is to put us to sleep. Consciousness is expensive; with our app shut down, the brain can bring the brunt of its resources to bear on a problem. But what is it doing, and how?

A fascinating and terrifying experiment done on our friend the rat shed some useful light. The top of a rat’s skull was removed to attach a semi-permanent measuring device that provided spatial resolution of the brain’s activity. The rat was then run through a difficult maze, presumably with a helpful human assistant to keep the brain-wires dangling from its open skull to a bank of electronics out of the way, as this was not a wireless system. The machine recorded the rat’s brain activity and correlated it with its position in the maze. Finally, our exhausted, questing, topless rat was allowed to sleep, and its brain activity was observed during the various cycles of sleep. As you might expect from the big lead-up, something fascinating was observed. When the rat entered slow-wave sleep — which accounts for most of the time spent sleeping — it showed bursts of activity consistent with moving through random sections of the maze roughly 20x faster than real time. That is, sequences of brain activity that had been seen in the maze were observed, but they were out of order and ran 20x faster than they had in reality. Then, when the rat entered REM sleep, the brain activity was consistent with a real-time trip through the maze! (See “Memory of Sequential Experience in the Hippocampus during Slow Wave Sleep”.)

Machines: What could account for this? It looks a lot like a clever machine-learning technique. The slow-wave phase looks like training — a rough, fast, low-resolution search for solutions to key points in the data, the part that takes most of the time. This looks a lot like an intelligent search through a massive parameter space for potential solutions. Training is followed by validation, or REM sleep, in which a high-resolution simulation of the full dataset tests the promising solutions! While we sleep, the brain is clearly working on our problems — apparently with a rather sophisticated combination of fast, low-res generation of possible solutions and real-time, high-res simulation of the promising ones.

Wide, Low-Resolution Search, Followed by High-Resolution Validation

We might implement something similar by adding to the Seldn-ish DMN described above an extra step at the beginning, which grabs a small subset of the data (features and examples) so that many more target/model/feature combinations can be explored, followed by a “pre-consciousness” step that runs the promising candidates through a full fit and validation before passing the results on to “Dave” for conscious consideration.
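
In the same hedged spirit as the sketch above (names, thresholds, and model choices all invented), the two phases might look like this: a “slow-wave” pass that screens cheap subsamples, and a “REM” pass that validates the survivors at full resolution:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def slow_wave_screen(data, n_rows=200, n_cols=10, threshold=0.1, seed=0):
    """Fast, low-res pass: tiny random subsets of rows and features."""
    rng = np.random.default_rng(seed)
    rows = rng.choice(data.shape[0], min(n_rows, data.shape[0]), replace=False)
    promising = []
    for t in range(data.shape[1]):
        inputs = [c for c in range(data.shape[1]) if c != t]
        cols = rng.choice(inputs, min(n_cols, len(inputs)), replace=False)
        X, y = data[np.ix_(rows, cols)], data[rows, t]
        small = RandomForestRegressor(n_estimators=10, random_state=0)
        if cross_val_score(small, X, y, cv=3, scoring="r2").mean() > threshold:
            promising.append((t, cols))         # a rough, out-of-order "replay" hit
    return promising

def rem_validate(data, promising, threshold=0.3):
    """High-res pass: full data and a bigger model for each survivor."""
    survivors = []
    for t, cols in promising:
        X, y = data[:, cols], data[:, t]
        full = RandomForestRegressor(n_estimators=200, random_state=0)
        score = cross_val_score(full, X, y, cv=5, scoring="r2").mean()
        if score > threshold:
            survivors.append((t, list(cols), score))
    return survivors                            # pass these on to "Dave"
```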

In conclusion, humans are the best thinkers we know. As we continue to build AI and machine learning, it pays to keep coming back to how we think and to keep asking how we can port those techniques over to machines. What else can we learn? And how can we port it over to silicon?

Next time, we’ll explore the process that brilliant people go through as they first grapple with the possibility of an understanding.


Travis Dirks, PhD
XLabs.ai

Entrepreneur, Physicist, Investor, Founder at XLabs.AI and Always Ascending