Your Personal Sim: Pt 4 — Deep Agents (Deep Learning and Natural Intelligence)

The Brave New World of Smart Agents and their Data

A Multi-Part Series

Part 1 — Your Attention Please
Part 2 — Why Agents Matter
Part 3 — The Agent Environment
Part 4 — Deep Agents (this post)

More soon… 
For series email reminders, enter your email address at ForesightU.com.

Why will our smart agents and sims soon become as indispensable as the web and our smartphones are today? Why will most of us be joking — and some of us seriously thinking — that our sims are “our better selves” by 2040, and perhaps for a few of us, even by 2030? To understand this key aspect of our global future, our next two posts will take a deep look at deep learning, a new paradigm not only of machine learning, but of future computer development.

This will be a long post, as it is about the technology behind the greatest story of our collective future, the advent of machines that think and feel like us, so I make no apologies for its length. Plenty of people will write the short versions. But there are many doubts and misconceptions on these topics, and the extra length will hopefully clear up a few of each.

There are also some rewards at the end, to make up for this post’s length. The first reward is the “Mind Meld” (aka, “Merging With our Sims”, or “Slippery Singularity”) prediction. This is a big reward, as it explores how humanity will use deep learning to solve the greatest tragedy presently afflicting our planet — the inevitable death of each and every one of us, due to the disposable nature of human biology. I think this mind meld future is inevitable, and when it comes later this century, hundreds of millions of us, at the very least, will use it to move easily into postbiology as our biological bodies age and die.

Once mind melding happens at scale, and we see that it works, cultures everywhere will stop pretending that human mental death is a good thing, and we’ll upgrade our religious faiths (which are fundamental to many people’s spiritual search, and will never go away) to be consistent with a new world of perpetual lifespan, for all who desire it. The second reward is more prosaic: some powerful investment tips you can implement today, in the calls to action at the end.

Schwab (2016)

It was nice to see Klaus Schwab, Chairman of the World Economic Forum, promote acceleration-awareness in his Fourth Industrial Revolution theme at Davos 2016, and in his new book, The Fourth Industrial Revolution (2016). But this message is likely to continue to be ignored by most of the folks who should get it, for the time being. Most political, policy, institutional, and corporate leaders are still stuck in old ways of thinking, almost entirely ignoring exponential megatrends. They are definitely ignoring deep learning, and lack any understanding of its major implications for their strategy, partnering, R&D, operations, marketing, business development, and corporate foresight work. But the longer they wait, the worse their competitive position will be.

Like any exponentially improving process, deep learning’s growth will start out looking slow, then suddenly it will overwhelm us. For a great intro to the power of exponentials, see Adrian Paenza’s Ted Ed lesson, How folding paper can get you to the moon (2012). Fold an ordinary piece of paper twenty-three times, and you get to the top of Big Ben. That’s already surprising. Fold it twenty-two more times, and you get to the Moon. That’s difficult to imagine. Fold it fifty-five more times (just 100 total folds, or informational doublings), and you’re now eight billion light years away from Earth. That’s almost impossible to imagine. But that’s how exponentials run. So knowing where they operate most strongly on Earth (infotech and nanotech), and when they’ll end, has become critically important to strategy.
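To see the arithmetic for yourself, here is a tiny Python sketch of the folding math. The starting paper thickness (0.1 mm) is my own assumption, so the exact heights will differ a bit from the lesson’s figures; the point is only how quickly repeated doubling runs away from intuition.

```python
# A quick sketch of the folding arithmetic. The 0.1 mm starting thickness
# is an assumption for illustration; the exact distances depend on it.
THICKNESS_M = 0.0001  # assumed paper thickness in meters (0.1 mm)

for folds in (23, 45, 100):
    height_m = THICKNESS_M * 2 ** folds          # each fold doubles the stack
    print(f"{folds:3d} folds -> {height_m:.3e} meters "
          f"({height_m / 9.46e15:.3e} light years)")
```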

For about five years now to insiders, and about three years to everyone else, it has been clear that deep learning, which uses reinforcement-based hierarchical neural networks and other variations of brain-inspired computing, will increasingly take over the field of machine learning, and perhaps, in its next feat, the entire field of high-end computer design. With these approaches, the code (and with computer design, the circuits) is grown, trained, and tested, not hand-built by humans. While we have a basic logical and mathematical understanding of the inputs, we can’t understand or describe the algorithms or connectivity that emerge. As with our own brains, their complexity exceeds our minds.

The people who make these deep learners will increasingly not be doing what Sam Arbesman, author of the great new book Overcomplicated: Technology at the Limits of Comprehension (2016) calls “physics thinking”, where math, logic, rationality, and engineering dominate. Instead, they’ll be doing “biological thinking”. Beyond their basic architecture, the vast majority of the evolved internal complexity and connectivity of these deep learners won’t be describable by human-understood science. Physics, math, probability theory, logic, and other tools of our limited rationality will fail to explain their higher features. What we’ll have instead is what we have in biology — a bunch of high-level, grossly useful models and analogies, a great set of observations of where these work and where they don’t, and a lot of practical experience and intuition with what kind of data, training methods, and selection environments have been best, so far, at creating and improving their performance and intelligence.

Recall NVIDIA’s work on self-driving cars, mentioned in our first post. Central to their approach is a toaster-sized supercomputer sitting in the car, the Drive PX, running a neural network that does computer vision, and which talks to a second neural net in the cloud, DriveNet, both of which “see” the world in a way very similar to the way human brains see (more on that later in this post). These learning networks are grown and trained like babies, not coded by humans. This brain-like approach in software and eventually in hardware will migrate to our industrial and domestic robots, and be at the heart of all our most complex systems.

The Next Chapter of Machine Intelligence

Nice overview of the next paradigm in machine learning. Floreano & Mattiussi (Eds.) (2008)

The next chapter in machine intelligence will involve what is called biologically-inspired computing. See Floreano & Mattiussi’s Bio-Inspired AI (2008) for a good older overview. When we borrow deeply from biology to guide our hardware and software design, we take advantage of the only known methods of making increasingly self-improving technology — the methods that led to our own emergence. Like biology itself, bio-inspired methods are mostly bottom-up and self-directed, rather than top-down, human-directed, or engineered. A mostly bottom-up, slightly top-down approach is how our own genes work in living systems, as the field of evo-devo biology demonstrates.

This post makes a case that a very bottom-up and user-involved approach to agent and sim development, with key roles for open source, open data, modularity, and mass user testing and training, will allow us to make our software and computers more like our biology, and in the process, achieve our best individual and collective futures.

“Artificial” intelligence (AI) is a good description of where computer intelligence sits today. This intelligence is human-constructed, simplistic, and brittle. It is about as natural as a building, which can’t adapt beyond its design, and begins decaying as soon as humans stop repairing it. A single bit out of place in a configuration file can cause total system failure in many current software systems. We program most of today’s computers top-down, using rational, logical, engineered approaches. They aren’t yet autopoietic, or capable of self-replication and adaptation.

Dasgupta & Nino (2008)

But they will be. They’ll be not only robust to error, but antifragile. That means they’ll have not just security, but immune systems, which learn from catastrophe and error. Catastrophes and errors actually make antifragile systems stronger, just as dirt and infections strengthen our biological immune systems. Rather than being “built”, it’s better to say they’ll be “seeded”, grown, and trained. Folks like Dipankar Dasgupta have been researching artificial immune systems (AIS) for twenty years, with little recognition from mainstream computer science. Here’s his latest book. I am convinced that the better we understand neuroimmunology, the better we’ll realize that the combination of bio-inspired computers and technological immune systems is the only reliable and proven path to real security, in both biology and technology.

We will address the safety of natural intelligence (NI) in our post on Safe Agents. If naturally intelligent machines prove to be rapidly self-correcting and antifragile when bad things happen to them, just as living systems naturally are, and unlike almost all of today’s AI machines, it seems clear that we as a society will continue to build and use them to solve our pressing human problems.

Self-improving, antifragile intelligence is so different from today’s artificial intelligence it deserves a new name. So let’s call it “natural” intelligence (NI), and recognize that it must be deeply biologically-inspired. Again, bio-inspired machines aren’t coded and designed, but rather are grown, gardened, and tested by us, against big data and the world. They have the equivalent of both brains and immune systems, and an ever-growing ability to self-explore, self-repair, and self-improve.

The symbolic, rule-based, top-down, engineered, and human-comprehensible approaches to AI, which have delivered modest progress for fifty years, mirror just a small part of what the human brain does. You can be sure they’ll also be a small part of the machine brains to come. Our systems, software, and computer designers will keep sliding toward naturally intelligent machines because working with them, once they reach a threshold level of intelligence and self-improvement ability, will be far more efficient and effective than continuing to design top-down, using the old paradigms.

We can also call bio-inspired computer hardware and software design “natural computing”, to distinguish it from the engineered, discrete, serial, rule-based, “nonbiological computing” that we still use in the vast majority of our IT systems. We saw the earliest signs of natural computing in the first crude neural network, Frank Rosenblatt’s perceptron, in 1957. But the perceptron didn’t have a good training algorithm, so this kind of computing made little progress for thirty years. A good training algorithm, backpropagation, was introduced to neural networks by Geoff Hinton and others in 1986, and neural nets began to make progress after that. But natural computing had to wait another twenty years, for fast processors with good hardware parallelism and access to big data. All told, it took fifty years for neural networks to become an overnight success.
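For readers who want to see how simple that starting point was, here is a minimal sketch of Rosenblatt-style perceptron learning in Python with NumPy. The tiny AND truth table and the learning rate are illustrative choices of mine, not anything from Rosenblatt’s hardware; the sketch shows the single-layer threshold idea that stalled until better training methods for deeper networks arrived.

```python
import numpy as np

# Minimal perceptron sketch: learn the logical AND of two inputs.
# Data and learning rate are illustrative, not from any real system.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])             # AND truth table

w = np.zeros(2)                        # weights
b = 0.0                                # bias
lr = 0.1                               # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)      # threshold unit
        error = target - pred
        w += lr * error * xi                   # Rosenblatt's update rule
        b += lr * error

print(w, b)                                      # a separating line for AND
print([int(np.dot(w, xi) + b > 0) for xi in X])  # -> [0, 0, 0, 1]
```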

Source: Naver D2 (DEVIEW 2015)

Natural computing’s successes began in earnest around 2005, as we will see. By 2009, nonbiological approaches to machine learning began losing out to biological approaches. Natural computing includes minimally biologically-similar hardware, like NVIDIA’s Pascal (optimized for running neural net software, but not yet deeply biological), and more strongly biological hardware like IBM’s SyNAPSE and other neuromorphic chips, and a wide array of biologically-similar machine learning software and algorithms, like recurrent and convolutional neural nets, reinforcement learning, hierarchies, modularity, swarm intelligence, evolutionary developmental methods, and much more.

Bio-inspired computing methods include biomimicry (biomimetics), the imitation of models, systems, and elements of nature to solve human problems, described well in Janine Benyus’s Biomimicry (2002). But they also take us beyond biology, which the word biomimicry doesn’t convey. Naturally intelligent computers will do things biology can’t, at speeds biological brains will never reach. They will learn to replicate, and to generate their own adaptive complexity and intelligence, far faster and more stably than we ever could.

Our naturally intelligent sims will help us with many things, as this series seeks to show. But of everything our sims can and will help us with, thinking about how they will advance evidence-based thinking and collaborative scientific and technological research, and where that will take us, is perhaps the most exciting of all our opportunities ahead. Demis Hassabis, CEO of Google DeepMind, makes that point in this lovely 14-minute video at Falling Walls 2015, which is well worth a watch.

Even though deep learning systems are nowhere near as complex yet as biological brains, they will keep learning and operating at least seven million times faster than biological brains, which are limited by electrochemical rather than electrical communication speeds. So it won’t be that much longer before they “learn their way up” to our level of complexity. In fact, this NI future seems so useful and powerful, I predict future science will show it is a developmental outcome that emerges on all technological planets, an “attractor” that humanity cannot avoid.

Many people currently talking about machine intelligence are still missing the increasingly bio-inspired, bottom up, and evolutionary developmental (evo devo) nature of the new generation of machines. They still think in terms of the top-down, rationalist, engineered way that most machine intelligence has emerged to date. But that top-down approach depends on our slow and limited biological human minds to grow it, and has far less potential than the bottom-up, self-replicating methods now emerging.

Top-down, rational design schemes for creating machine ethics and engineering “safe AI” in our sims and robots will always be very limited in usefulness, in a world of increasingly bottom-up NI systems. Even in today’s rationally engineered computing environments, all our leading computer science algorithms and data structures are not fully rational; they are rationality-guided but computationally incomplete guesses at how to represent the world in a useful way. Logic, rationality, probability theory, and other top-down tools let us make better guesses, but they are still just guesses.

Most fundamentally, almost all complex things in the world, including life and minds, are both evolutionary and developmental. That means they are almost entirely bottom-up, experimental systems (evolutionary), with just a few empirically-found rules for top-down, systemic guidance (development). Evo-devo biology is precisely how the most complex organisms on our planet self-organized their own amazing complexity. Evo devo methods are how tomorrow’s smart machines and agents will emerge, as these methods alone allow computers to increasingly guide their own self-improvement.

In his beautifully-written book on machine learning, The Master Algorithm (2015), recommended earlier as background reading, computer scientist Pedro Domingos identifies “Five Tribes of Machine Learning”. Each tribe has been successful, to some degree, in building learning computers to date. Domingos’ Five Tribes, and, in parentheses, the current favorite algorithm of each, are:

1. Bayesianism (probabilistic inference)
2. Evolutionism (genetic programming)
3. Connectionism (backpropagation)
4. Analogizers (support vector machines)
5. Symbolists (inverse deduction)

Deep learning, which we’ll soon discuss at length in this post, is a kind of Connectionism, the Third Tribe on this list.
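To make the First Tribe’s favorite tool concrete before we move on, here is a minimal sketch of Bayesian updating in Python. The coin, the flat prior, and the flip data are all invented for illustration; the point is just the update loop that turns a prior belief plus evidence into a posterior belief.

```python
import numpy as np

# Tribe 1's favorite move at toy scale: Bayesian updating of a belief.
# Here we infer a coin's bias from flips; the prior and data are made up.
theta = np.linspace(0.01, 0.99, 99)      # candidate values for P(heads)
prior = np.ones_like(theta) / len(theta) # flat prior belief

flips = [1, 1, 0, 1, 1, 1, 0, 1]         # observed data: 1 = heads, 0 = tails
posterior = prior.copy()
for flip in flips:
    likelihood = theta if flip else (1 - theta)
    posterior *= likelihood              # Bayes' rule, unnormalized
    posterior /= posterior.sum()         # renormalize to a probability

print("most probable bias:", round(theta[posterior.argmax()], 2))  # ~0.75
```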

When we ask ourselves to write the story of life’s Intelligence Emergence Stack — the evolutionary developmental hierarchy in which intelligence emerged in living systems on Earth — there are good arguments that biology followed the order laid out above. This is not Domingos’ order in his book, as he does not (yet) view the universe from an evo devo perspective. But I for one am hopeful that one day, he will. Let’s briefly back up these claims:

  1. Bayesian Intelligence. Molecular precursors to our first cells must have used chemistry to do probabilistic inference, in replicating chemical networks, to model and react to their immediate surroundings, and to support their survival, in molecular evo devo. One good book that takes this perspective is John Campbell’s Darwin Does Physics (2015). Campbell is a scholar in our Evo Devo Universe research community.
  2. Evolutionary Intelligence. Eventually life emerged, with its cells and genes, which are both evolutionary and developmental.
  3. Connectionist Intelligence. Eventually, a special subset of dominant multicellular life built neural networks (brains).
  4. Analogical Intelligence. Eventually, the most intelligent and dominant of these animals with brains began thinking in analogies, a process that all higher animals, including crows, can do.
  5. Symbolic Intelligence. Finally, humans began their runaway partnership with technology, and evolved and developed symbolic language, and later, formal symbolic reasoning in the Enlightenment (1600–1800).

As might be expected on reflection, artificial intelligence research has emerged in the exact reverse of this order. In the 1960s, we began working on machine intelligence using top-down, rule-based, and discrete symbolic reasoning — the epitome of Arbesman’s precise yet oversimplistic “physics thinking.” That was where the easiest work could be done at first, and “Artificial” was a great word to describe this entire process. Symbolic strategies made lots of early progress, and were greatly overhyped by some, but anyone with a biology background had little faith that they alone would create truly smart machines.

As symbolic progress slowed, we moved to support vector machines (analogizers) in the 1990s, a promising step deeper into the nature of intelligence. We also began experimenting with genetic programming and neural networks in the 1980s and 1990s, but both were still too immature then to make much progress. In the early 1990s, we began making progress with Bayesian networks. Since 2009, as we’ll see below, connectionism, via deep learning, has become the latest important advance.

The dramatic recent success of deep learners and the return of connectionism marks a big transition, and I think we need new language for that transition. From now on, whenever we talk about the future of thinking machines, we should favor the phrase “Natural Intelligence” over “Artificial Intelligence”, and begin phasing out the latter phrase, as it is increasingly irrelevant and incorrect.

That change of language can help signify, to those ready to hear it, just how momentous this shift to deep learning actually is. We’re finally working earnestly across all the layers of the stack. Our best strategy to build smarter machines, from here forward, is to try to recapitulate all the key intelligence innovations that nature has made to bring us to this point. What’s more, we will increasingly let our machines lead us in that journey, as they get ever more effective at their own natural learning.

Our machine learning community has a lot of work still to do in creating natural intelligence. Our current understanding of evolutionary developmental (evo devo) computing is quite primitive. Just like the evolutionary biologists who continue to ignore evo-devo biology, the many processes of convergent evolution, and the way development controls evolutionary processes, today’s leading conferences on evolutionary computing, like GECCO, still don’t pay much attention to development. Evo devo computing, for its part, must be tied to the development, variation, and maintenance of connectionist networks in machines, just as genes guide a living brain’s neural networks. Finally, all of these tribes must be tied into Bayesianism. We need to understand why Bayesian methods led inevitably to the kinds of intelligences that life uses. Computational neuroscientists have built early Bayesian models of brain functions, and biologists use Bayesian networks to discover gene associations, but it will be a while before we understand evo devo systems in Bayesian terms. All this will be needed to create deeply naturally intelligent machines, and the technological singularity, in my opinion.

Hox genes. A fundamental component of evo-devo biology. Self-improving (autopoietic) machines will need something like these, to become deeply bio-inspired.

At present, a tiny but rapidly growing number of computer scientists train and guide, rather than program and engineer, the new deep learning systems that are driving cars and acting as the cloud-based “brains” behind our current smartphone agents. Most computer science will be done this way in the years ahead. Large numbers of computer scientists and users will be experimenting with and training, far more than designing or programming, tomorrow’s leading sims. For the future of NI, bet on evo devo, which is 95% bottom-up, not on rational design or other top-down approaches. And bet on evo devo machines and the environment doing the “programming,” not human brains.

Evo Devo Universe (Smart 2008)

The growth of life and mind has always been a lot of evolutionary trial and error balanced by a small amount of slightly improved developmental processes, in each replication cycle. So too it seems likely to be with tomorrow’s computers. For more on that perspective, see my book precis, Evo Devo Universe (2008) and our interdisciplinary research community EvoDevoUniverse.com.

When we view the world from the wrong frameworks, life has a way of showing us our mistakes. I am hopeful that deep learning’s continuing juggernaut run through the machine learning space will make the many currently top-down, rationalist philosophers of AI understand the unique advantages of applying the evo devo paradigm to the future of technology. We shall see.

So we now have a rough roadmap for how the much-vaunted “technological singularity” will arrive, later this century. In fact, it is no longer a “singularity,” a point at which our models and foresight break down, but rather a rapidly approaching and natural transition that many of us now expect. So let’s call it a predictable phase transition in natural intelligence (NI), not a singularity, and bring it into the realm of hypothesis and science.

Why Neural Networks are So Naturally Intelligent

Neural Networks are Awesomely Awesome, in at Least Three Major Ways

Let’s take a look now at neural networks, both in brains and machines, to see why they are so important to the future of postbiological intelligence.

To better understand our own natural intelligence, consider just three great advantages of neural networks (connectomes), which are at the heart of today’s deep learning machines:

First, neural networks fail gracefully when damaged by the environment, because useful information is never stored in just one place. Concepts, models, ideas, and predictions are always stored “a little bit everywhere”, represented in the number, locations, and strengths of synaptic weights. Such systems undergo what is called “graceful degradation” when damaged. As links are destroyed, performance slowly decreases; it rarely dies all at once. In today’s artificially intelligent computers, changing one single bit in a config file can crash the whole system. Not so with natural intelligence. If a neural connection is destroyed by trauma, disease, or biochemical error, we may partially forget some aspect of the information we wanted to keep, but we can often repair and reestablish the memory by concentrating on some other aspect of the thing in question and “routing around the damage”. This is what you do when you forget a person’s name but think about some other aspect of the person, until their name suddenly comes back. It’s also what you do when you walk back to the place where you were thinking about what you wanted to do next in order to remember it, thus returning to the original net of mental associations in which you formed the idea. All human thinking and memory works in this incredible associative way.
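You can watch a crude version of this graceful degradation in a few lines of Python. The sketch below uses scikit-learn (my choice of tool, purely for illustration): it trains a small neural net on the classic handwritten-digits set, then zeroes out growing fractions of its weights and reports accuracy, which typically falls gradually rather than collapsing at the first damaged “synapse”.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train a small net on the 8x8 digits set, then "damage" it by zeroing
# random weights and watch accuracy decline gradually, not all at once.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

original = [W.copy() for W in net.coefs_]      # keep the undamaged weights
rng = np.random.default_rng(0)
for damage in (0.0, 0.1, 0.3, 0.5, 0.7):
    for W, W0 in zip(net.coefs_, original):
        W[:] = W0                              # restore, then damage afresh
        W[rng.random(W.shape) < damage] = 0.0  # knock out this fraction
    print(f"{damage:.0%} of weights zeroed -> "
          f"accuracy on held-out digits: {net.score(X_test, y_test):.2f}")
```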

Second, neural networks can access vast amounts of stored information in each processing step, because all information in the brain is just a few “degrees of separation” (switching circuits) away from all the other information. Our brains have neural switching speeds of roughly a thousand times a second. Electronic transistors can switch on and off billions of times a second, making them roughly six to seven orders of magnitude (millions of times) faster at this task than biological brains. But because we store information associatively, in the number and strength of connections between neurons, we can search our memory almost instantaneously to see if we already know a concept, a name, or a face. It may take just a hundred neural processing steps to scan our entire memory for a concept, as each step has access to so much information, due to the massive parallelism of our connectome. That means, within seconds, we can say with confidence whether we know something, have a partial memory of it, or it feels fully new, at least according to our current search — of our entire brain! Conventional serial computers cannot do this. Even though they are billions of times faster, they are not parallel, or naturally intelligent. Each search step accesses so little information that trying to search a similarly large database takes forever. They can’t make realtime, dynamic estimates of what they know and don’t know. But deep learning systems, especially hardware-based ones, can do this. They remember like us.
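A classic toy demonstration of this kind of associative recall is a Hopfield-style network, sketched below in NumPy. The patterns are random and the sizes arbitrary (my assumptions); the point is that a memory stored “a little bit everywhere” in a weight matrix can be recovered from a partial, corrupted cue in a handful of parallel steps.

```python
import numpy as np

# Tiny Hopfield-style associative memory: information is spread across a
# weight matrix, and a noisy cue recalls the whole pattern in a few sweeps.
rng = np.random.default_rng(1)
N = 100                                          # "neurons"
patterns = rng.choice([-1, 1], size=(3, N))      # three stored memories

# Hebbian storage: every pattern nudges every pairwise connection.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    s = cue.copy()
    for _ in range(steps):                       # a few parallel update sweeps
        s = np.where(W @ s >= 0, 1, -1)
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=25, replace=False)     # corrupt 25% of the cue
cue[flip] *= -1

# Overlap near N means the full memory was recovered from the damaged cue.
print("overlap with stored memory:", int(np.dot(recall(cue), patterns[0])), "/", N)
```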

Third, neural networks are always simultaneously comparing a vast number of parameters of anything of interest, as they both remember and think via synaptic connections. Connectomes offer the most powerful informational and computational architecture that we know to continually explore and tune a “hyperparameter space” of large numbers of potentially interacting parameters. Our associative brains are the ultimate “relational databases”, relating everything to everything else. The central problem of intelligence is always the appropriate mapping, fanning out (evolution), and pruning (development) of a mind, to best navigate the combinatorial explosions of possible representations of reality (model parameters). See Alice Zheng’s (@RainyData) “hyperparameter tuning” post for more on this “metalearning task” (something that must be done prior to actual learning).

Source: Zheng, How to Evaluate Machine Learning Models (2015)

When they are properly connected, neural networks can quickly sift and pay attention to just that small combination of parameters that seem most adaptive to the problem at hand. Associational architectures quickly “fan out” (an evolutionary process) into a vast number of possible associations, and then just as quickly “fan in”, or prune (a developmental process) to just the information that they think is still worth attending to, and this process is how we make predictions. This ability to continually fan out and fan back in, while simultaneously comparing a vast number of competing information sources to form an intuition, a model, a prediction, or a plan, is an evo devo process that allows us to elegantly manage a torrent of incoming information, and simultaneously compare thousands of potentially relevant parameters in the world. Again, conventional computers can’t do this. But deep learning systems are learning how, which means they will increasingly not just remember, but also think — like us.
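In today’s machine learning practice, a humble version of this fan-out-and-prune move is hyperparameter search, of the kind Zheng describes. Below is a minimal sketch using scikit-learn’s random search (my choice of tool; the model, parameter ranges, and dataset are all illustrative): it fans out over random configurations, then prunes to the one that scores best.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

# "Fan out" over a hyperparameter space with random search, then "prune"
# to the best-scoring configuration. The ranges below are illustrative guesses.
X, y = load_digits(return_X_y=True)

param_space = {
    "hidden_layer_sizes": [(32,), (64,), (128,), (64, 32)],
    "alpha": [1e-5, 1e-4, 1e-3, 1e-2],           # L2 regularization strength
    "learning_rate_init": [1e-4, 1e-3, 1e-2],
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_distributions=param_space,
    n_iter=10,                                    # try 10 random combinations
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```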

Neural networks aren’t perfect. Whether biological or technological, they can and do eventually become overtrained. But we can get out of that trap by rejuvenating them, opening new connection space, and retraining on new data. We are a long way from figuring out how to do that with human biology, but we are already learning how to do that renewal with many of our deep learning machines. Today’s artificial neural networks also are not “compositional”, meaning they don’t yet know how to combine different pieces of information sequentially, in different ways, to do chains of thinking, following sequential rules. So the symbolic processing that today’s computers can do very well, and humans can do to a limited degree, needs to emerge in the deep learning networks of the future, to move them fully into natural intelligence. But we’ll get there, by better understanding our biology.
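Practitioners already have a blunt instrument for the overtraining half of this problem: hold out a validation slice and stop training when it stops improving. Here is a minimal sketch using scikit-learn’s built-in early stopping; the network size and settings are illustrative only, and this is a stand-in for the richer “rejuvenation” described above, not a solution to it.

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Guard against overtraining: monitor a held-out validation slice and stop
# when its score stops improving. All settings here are illustrative.
X, y = load_digits(return_X_y=True)

net = MLPClassifier(
    hidden_layer_sizes=(128,),
    max_iter=2000,
    early_stopping=True,         # hold out a validation fraction automatically
    validation_fraction=0.2,
    n_iter_no_change=10,         # stop after 10 epochs with no improvement
    random_state=0,
)
net.fit(X, y)
print("stopped after", net.n_iter_, "epochs;",
      "best validation score:", round(max(net.validation_scores_), 3))
```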

So as neuroscience keeps advancing, we’ll keep copying all the brain’s neural network structures and algorithms that we can, algorithms we will likely never fully understand, and creating experimental versions of them in our hardware and software. We’ll train those neural networks with data and our feedback, not program them. Those systems will in turn run vast numbers of new experiments of their own, in their reconfigurable hardware, in their software, and in the way they interact with the world. Many of those experiments, of course, will be initiated by our sims and agents, and run on us, and the world. They’ll learn just like a baby learns, with progress and failures too, but constantly getting better by trial and error.

Nature, on DeepMind Learning to Play Video Games, Feb 2015.

In a famous recent example, Google DeepMind’s deep learning network taught itself to play 49 Atari 2600 video games, with no human training, in February 2015. It was immediately better than the best human players on 23 of those games, and in a few, like Breakout, it uncovered optimal play strategies that human players hadn’t realized were available.
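The learning core under that feat was reinforcement learning, which DeepMind coupled to a deep network reading raw pixels. Stripped of the deep network, that core update fits in a few lines; the toy corridor world, rewards, and constants below are my own illustrative choices, not DeepMind’s system.

```python
import numpy as np

# Toy tabular Q-learning on a 1-D corridor: start in the middle, earn +1 for
# reaching the right end. This is only the reinforcement-learning core that
# DeepMind paired with a deep network; the world and constants are made up.
N_STATES, GOAL, START = 7, 6, 3
ACTIONS = (-1, +1)                          # move left / move right
Q = np.zeros((N_STATES, 2))
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration
rng = np.random.default_rng(0)

def choose(state):
    # epsilon-greedy, with random tie-breaking so an all-zero table still explores
    if rng.random() < epsilon or Q[state, 0] == Q[state, 1]:
        return int(rng.integers(2))
    return int(Q[state].argmax())

for episode in range(300):
    state = START
    while state != GOAL:
        a = choose(state)
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        Q[state, a] += alpha * (reward + gamma * Q[nxt].max() - Q[state, a])
        state = nxt

print(np.round(Q, 2))   # the 'move right' column should dominate outside the goal
```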

As mediated reality grows (see our last post), deep learning-backed software agents will be able to learn even faster from many virtual realities than from physical reality, once enough data and accuracy are in the simulation. They’ll continually take their most useful virtual learning back into the physical world. Learning is particularly rapid in virtual space because more iterations can be tried faster, as long as computational power and simulation detail are sufficient, with no risk to physical life, and with much less need for physical resources.

Inceptionism — Visual Imagination in Neural Networks

Biological neural networks do this virtual world simulation constantly already. It’s called dreaming, and imagination. So do deep learners now. See the dramatic visual examples of “inceptionism” by Mordvintsev et al. at Google for how today’s deep networks can “dream” or “imagine” the world around them. I’ve got a few of these artworks on my wall now, to remind me that our most bio-inspired computers are just now learning to dream, in limited ways. It’s truly a brave new world!

Again, remember that evolution and development in electronic systems, whether hardware or software, can happen far faster than in human brains. Evolutionary pattern recognition (thinking, imagination, dreaming) runs at roughly 100 mph (the speed of neural communication) in human brains. That’s fast within a small human brain, and this speed keeps us alive in the world, but the same processes run at the speed of light inside dynamically reconfigurable hardware-based neural networks, in neuromorphic chips. That’s at least seven million times faster than human brains. So you can see where all this is going.

Deep Learning: 2005 to the Present

Let’s now run through a quick history of natural computing’s most recent star, deep learning, to see it in broader context. Again, deep learning is a type of bio-inspired computing that uses neural networks of different varieties (hierarchical, recurrent, convolutional, goal-directed, reinforcement-driven, etc.). It is the hottest new area of machine intelligence, and like any rapidly improving area, it is easily overhyped, especially for what it can deliver in the next five years. But beyond that, all bets are off as to what these systems can deliver. They’re on the path to natural intelligence.

Moore’s Law Began to Break Down in 2005 (Blue Curves)

An interesting and unconventional place to begin our deep learning story is in 2005. In that year, Moore’s law in MOS integrated circuits ran into the first of a series of endings that will increasingly move us out of its fifty-year-long “magic shrinking transistor” paradigm. All exponential growth in any substrate can only run for so long; then it must jump to a new substrate. 2005 brought the end of something called Dennard scaling, which meant that chips got too hot (leaked too much current) if you shrunk them any further, so around that year chip companies began producing multicore CPUs. The chip industry didn’t want to go multicore, as no one knew how to connect multicore chips in useful ways (parallel computing). But the end of Dennard scaling forced them to start making a bunch of first-gen, weakly parallel CPUs. As miniaturization limits grow, Intel’s former Chief Architect, Bob Colwell, predicted in 2013 that Moore’s law will be totally dead within a decade. If you care about natural intelligence, please pray for that prediction to be true! Only then will deep learning truly dominate, in both hardware and software domains, as we’ll see.

As Moore’s law was hitting its first ending in 2005, companies like NVIDIA, which had been making graphics processing units (GPUs) to run video games since the mid-1990s, began realizing they were in a unique position to take a leadership role in the future of machine learning. At first, their chips used simple parallel processing in hardware and software, primarily for graphics. But as the video game industry exploded, GPUs rapidly improved their performance, with doubling times much faster than for CPUs (often doubling their performance per price every 12–16 months, instead of every 18–24 months). Before long, GPUs, not the CPUs on our motherboards, had become the best places to run the computationally intensive algorithms being used by the machine learning community. These simply-parallel GPUs, in the graphics cards on our desktop computers, running our ever larger screens and our video games, can be thought of as Earth’s first mass-produced, weakly bio-inspired hardware brains.

Thus 2005 can be argued to be the time when the chip industry began to move from “miniaturization exponentiation” to “parallelization exponentiation”, doubling the number of processors and circuits that can work together simultaneously in useful ways. Parallel exponentiation is much harder, because we humans don’t know how best to connect up parallel systems. When we were in the middle of the Moore’s law era of continually shrinking circuits, attempts to build massively parallel machines, like Danny Hillis’s impressive Connection Machine in the 1980s, unfortunately just couldn’t work. Their hardware became obsolete almost immediately after they were built. But just as importantly, we had no idea how to program those deeply parallel machines, and no incentive to do so, as we got so much more performance return by continuing to shrink standard, nonbiological, serial Von Neumann computer architectures.

Fortunately, biology has had billions of years to make massively parallel self-improving systems, and after 2005, computer hardware and software began to get parallel enough for us to start using bio-inspired methods. On my website in 2002, I predicted we’d need an end of Moore’s law and a rise of massive parallelism, neural nets, and bio-inspired computing to get real machine intelligence. So I’ve been gratified to see these emerge over the last decade.

Scholars who publish on exponential technology growth, in journals like Technological Forecasting & Social Change, tell us that individual exponentials always end. But if we live in a universe where nanotech and infotech are special, as I argued in Post 3 (The Agent Environment), then whenever any productive technology exponential ends, it creates technical and market opportunities for new exponentials to emerge, out of nanotech or infotech strategies that couldn’t work before. So as exponential miniaturization of digital circuits began to end in 2005, we created the first real opportunities for exponential parallelization of those circuits, and thus deep learning, to emerge. That new exponential is now the one to watch. The bottom line, for those of us who do foresight work, is to be very careful to identify the appropriate exponentials relevant to our problem. They may not be the ones that most people are thinking about.

Ironically then, the beginning of the end of Moore’s law is one of the best things that has happened to machine intelligence. As chips stop their magic shrinking game, it is becoming economically possible, for the first time ever, for chip companies to massively parallelize them, bringing more brainlike machines, what we can call Natural Intelligence, to the world. Artificial intelligence is top-down, human-engineered machine learning. We’re moving out of that paradigm right now. Natural intelligence is bottom-up, self-guided, and deeply biologically inspired.

Natural intelligence will be the future of our most advanced CPUs and GPUs. They’ll become increasingly neuromorphic (brain-architecture inspired), like the experimental SyNAPSE chips by IBM and others, and those architectures will be controlled by technological versions of genes, hardware description languages that can evolve, and that each specify the kinds of neural network architectures that develop in each replication cycle. Again, human beings won’t program these naturally intelligent machines, as we aren’t smart enough, but I’m convinced they’ll be tomorrow’s best self-learning systems.

Let’s jump ahead now to 2009, another big year in the deep learning story. Neural networks can’t work well unless they have a lot of data to crunch, as well as machine learning professionals who believe crunching all that data will yield powerful results. In that year, Halevy, Norvig, and Pereira of Google published a seminal opinion paper, The Unreasonable Effectiveness of Data, which described the big progress being made in statistical, associational approaches to speech recognition, language translation, and language understanding. This widely-discussed paper was an important signal, letting both machine learners and the technically literate community know just how important statistical approaches and web-scale data were becoming, and would increasingly be, to the future of machine intelligence.

Also in 2009, a type of deep learning system called a Long Short-Term Memory network, developed by my friend Juergen Schmidhuber and his team at IDSIA in Switzerland, became the first deep learning system (a recurrent neural network) to win an international machine learning competition, against other traditional, much less bio-inspired approaches. Their system won first for handwriting recognition (ICDAR 2009), then later for traffic sign recognition (IJCNN 2011), then for a variety of image recognition tests (ISBI and ICPR 2012). Their 2011 win was the first to achieve what Schmidhuber calls “superhuman performance” in complex visual recognition, beating humans at recognizing traffic signs in the wild.

In 2010, Kaggle, the leading predictive modelling competition platform, emerged, creating a new place for data scientists to openly compete to produce the best predictive software. It has grown to half a million registered “Kagglers” since, and many of the world’s deep learning practitioners engage in contests and publicly share their code on Kaggle today.

In 2011 and 2012, academic teams using neural networks again won character recognition, traffic sign recognition, and medical imaging tests against other machine learning approaches. The ILSVRC 2012 ImageNet competition was perhaps the turning-point event for deep learning: neural networks were so successful at discriminating images on ImageNet (a common image data set used by the machine learning community) in that competition that most machine learners then turned away from hand-built “feature engineering” toward unsupervised feature learning using deep learning. Google, Facebook, Microsoft, and other majors immediately noticed this change and began acquiring deep learning research teams and startups around the world.

By 2011, NVIDIA was also doing increasingly complex parallel hardware and software design, using their GPUs as accelerators for large financial and supercomputing clients. After 2012, inspired by deep learning’s advances, NVIDIA began to plan a major pivot of their company toward artificial intelligence, to try to sustain their manufacturing leadership position in this rapidly emerging field.

The success of deep learners entered the public consciousness in June 2012, with John Markoff’s New York Times article, How Many Computers to Identify a Cat? 16,000. This article described Andrew Ng and Jeff Dean’s team at Google, which used 16,000 processors running a network of one billion connections to identify cats, and other objects, from 10 million YouTube videos, using an unsupervised (autonomous) approach.

This Google Brain network is a nine-layer system, only three layers of which are particularly complex (structures called sparse autoencoders). It could only recognize cat faces head-on, while humans can recognize them in any pose. But the cat was out of the bag, so to speak :) Not just industry insiders, but techies everywhere began following the deep learning story, which has been accelerating ever since.

After 2012, deep learning began working well in a variety of applications, such as auto-captioning of images, language translation, computer vision, and several other fields. For an excellent window into this prolific period, see Jeremy Howard, “The Wonderful and Terrifying Implications of Computers That Can Learn,” TEDxBrussels 2014.

See also Steve Omohundro’s (@steveom) great TEDx Talk, What’s Happening With Artificial Intelligence? (2016). His second slide highlights a few of the multi-billion dollar investments we’ve seen in AI over the last three years.

Steve Omohundro, What’s Happening with AI? (2016) Slides

Let’s look at a few highlights from this most recent period. In 2014, Andrew Ng, formerly at Google, joined Baidu to build a speech recognition system entirely via deep learning. This was very ambitious, as all previous speech recognition systems had involved significant amounts of human-directed training and feature engineering. Also in that year, several companies made some huge investments in deep learning, as summarized in the slide above.

In 2015, Baidu announced that their deep learning network was the first to reach superhuman performance in recognizing short clips of speech spoken over the phone (“Baidu’s Deep-Learning System Rivals People at Speech Recognition,” Tech Review, 2015). Coincident with this, Baidu launched a smart agent, Duer, to help smartphone users do various tasks.

Also in 2015, we saw NVIDIA’s self-driving car, a much more rapidly emerging, and more bottom-up, system than the mapping-based approach to self-driving cars that Google has been developing for ten years, since Sebastian Thrun’s team won the DARPA 2005 self-driving car competition. Perhaps the most amazing thing about the NVIDIA car was that it learned to reach near-human-level performance over just six months in 2015. With the right hardware and software, the right problem, and good training data, these systems can rapidly gain human-level proficiency (picture below).

The layers in these deep learning systems aren’t nearly as complex as the human brain yet. The human visual system, for example, is still much more elaborate. A task like face recognition in our brain begins with neural nets in the retina of your eye, then goes to relay nets in the thalamus called the LGN, then to six layers of visual cortex at the back of your brain in V1, then to the six layers of V2, then V3 and V4, then to the fusiform face area (another six-layer region of cortex, specialized to process faces), and then to individual cells, potentially including a grandmother cell (or small network) that recognizes only your nanny, and no one else. We have a good ways to go before our deep learning systems are as complex as this. But we will get there, surprisingly fast.
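For a sense of scale, here is a minimal sketch of a layered convolutional network in Keras (my choice of framework; every size and layer count is arbitrary). It is nowhere near the retina-to-fusiform pipeline just described, but it shows the same motif: stacked feature-extracting layers feeding a final recognition decision.

```python
from tensorflow.keras import layers, models

# A small stack of convolution + pooling layers feeding a classifier head:
# the same motif of stacked feature detectors, at toy scale. All sizes are
# illustrative, not a model of any real visual system.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # a small color image
    layers.Conv2D(32, 3, activation="relu"),   # early layers: edges, blobs
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # mid layers: textures, parts
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),  # later layers: object-like features
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),     # final call: face / not face
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```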

Facebook’s Yann LeCun (@ylecun) is a deep learning leader who is presently building the best face recognition solution available on the planet. It may already have superhuman performance narrowly, and it will achieve it broadly soon. The FBI launched a $1B face recognition project in 2012, but knowing how federal institutions contract such work, I predict it will be junk, and one of the deep learning IT leaders listed above will get there first.

If the FBI really wanted to get their solution on time and on budget, they could have run a few large parallel contracts with a variety of IT leaders, not defense contractors, plus many smaller incremental competitions on Kaggle, with tens of millions in education and startup bounties available for any small team or sole practitioner who deployed anything semi-competent during the competition.

That would cost a lot less and deliver much faster, better, and deeper technical and social returns. But taking a mostly bottom-up, evo devo strategy would have required recognizing that face recognition is a tool all of us will have. They won’t be able to corner it, or even get there first. A society of Little Brothers (mass sousveillance of each other, via our sims) is not only inevitable, it is far safer and more antifragile than the Big Brother (surveillance) society that some of our security leaders falsely envision.

Deep learning apps dominated NVIDIA’s GTC 2016. See this May 6th NVIDIA piece on how their engineers taught a car to drive using their Drive PX hardware, their software, and lots of training data. NVIDIA shipped a board last year, the GTX Titan X, that folks can use to train neural networks on their home PCs, and they’ve got a new GPU (Pascal) and board (Tesla P100) that will be 10X faster at running deep networks, shipping next month.

In March 2016, Google DeepMind’s computer scientists and neuroscientists built a program, AlphaGo, that beat Lee Sedol, one of the world’s top-ranked Go players, four games out of five. Go is exponentially more complex than chess. See this lovely video for more on how a relatively small team of fifteen employees at DeepMind accomplished this amazing feat, using a blend of deep learning and reinforcement learning, and clever training and goal-development architecting for this amazing hardware and software “brain.” [Note: It also turned out, per Google CEO Sundar Pichai (@sundar_pichai) at Google I/O on May 18th, that Google built a custom ASIC chip for their deep learners, which they call a Tensor Processing Unit, and which they say is ten times more efficient per watt than commercial GPUs and FPGAs. It’s great to see Google in the chip-making business for machine learning! I hope that continues.]

So we’re truly off to the races now with deep learning, and we’ll see a new generation of programmers using these increasingly biologically-inspired approaches to machine learning in the coming decade, for a vast range of uses. See Eric Siegel’s Predictive Analytics (2013) for some of the areas machine learning is already disrupting. We will see deep learning increasingly prevalent in automation and robotics of all kinds in coming years.

These successes are vindications for folks like Geoff Hinton, one of the fathers of connectionist computing’s most useful algorithm to date, backpropagation, in 1986. At that time, computers weren’t fast or parallel enough, and data sets weren’t big enough, for neural networks to deliver many human-surpassing results. Now they are, and Hinton leads a large deep learning team at Google.

They are also a vindication for technologist Jeff Hawkins, who published an influential book, On Intelligence, in 2004, arguing that a special kind of neural network, an HTM network, modeled after the human cortex, would be key to the future of machine intelligence. Hawkins and his colleague Dileep George, now running Vicarious, made some progress with their HTM-variant networks, and they opened their platform to community use. But without the resources of a Google, Microsoft, IBM, or NVIDIA, they couldn’t quite jump-start this field at the time. They too must be smiling today.

There are now many entry-level resources for learning more about deep learning. There are tons of YouTube videos, many on recent achievements, including cat recognizers, speech recognizers, auto-captioning, video game playing, and self-driving cars. NVIDIA has good deep learning tutorials, including Deep Learning in a Nutshell (2015). Michael Nielsen has a great free online textbook, Neural Networks and Deep Learning (2016). See presentations like A Short History of and Intro to Deep Learning by John Kaufhold (89 slides). Browse the DeepLearning.net wiki for conferences and resources. See Quora’s tags for Deep Learning, Convolutional Neural Networks, etc. Join Reddit’s Machine Learning and Deep Learning communities. For places to work or invest, see Venture Scanner’s list of nearly 1,000 AI companies. Some analysts have estimated that about a fifth of these presently employ or are developing deep learning competencies in their solutions. That percentage will obviously rise, among the future leaders.

Source: VentureScanner (2016)

Deep Learning Captures Real Neurobiology

Are deep learning neural networks really biologically inspired, or are they just a “toy model”, slightly useful but not complex enough to capture the way the brain actually works? A new paper by Yamins and DiCarlo, Using goal-driven deep learning models to understand sensory cortex, Nature Neuroscience 19:356–365, Mar 2016, puts this question to rest.

Their paper demonstrates that even today’s simple deep learners duplicate many powerful features of how neurons in human visual sensory cortex process information and predict visual images. It also gives research guidance to computer scientists and neuroscientists for the next five years. The paper is behind a paywall, but here is an excerpt of the front page.

See also DiCarlo et al.’s 2014 paper, which directly compares the representational performance for visual object recognition of DNNs (deep neural networks) to the primate brain, finding both efficient at constructing representational spaces in which objects of the same category are close, and objects of different categories are far apart, even with large variations in object example, position, scale, and background. This isn’t our father’s A.I., it’s natural intelligence, or N.I.

Papers like these show us that deep learners already strongly mimic how we mammals make sense of and remember the world. DNNs are likely still missing some of our basic algorithms, however. We don’t really know, because long-term memory encoding has not yet been fully cracked by neuroscientists, though we are fast closing in on the prize.

Good book on the anatomical basis of human memory (Yuste 2010)

One of the things we do know about human memory is that its most important and basic component by far is the shape and variety of synapses of the 10,000 dendritic spines (on average) that lead into every individual cortical neuron in our brains. This gross basic connectivity and synaptic weighting is crudely captured in today’s deep learners. A good book on spines, which explores how they form neural circuits and memories, is Rafael Yuste’s Dendritic Spines (2010).

In Nobel prize-worthy work published in 2014, Steve Ramirez and Xu Liu implanted a fake memory of a traumatic event, a foot shock, into a living mouse’s brain, by altering the shape of dendritic spines with an optically sensitive transgenic protein (ChR2) and laser light, in an area called the hippocampus, which stores the most recent two days of our memory, and which writes some of those short-term memories to long-term memory (in cortex) when we sleep. This and similar experiments have confirmed decades-old theories that our memories are stored in the architecture and connectivity of the thousands of dendritic spines that connect each of our pyramidal neurons to the others in our brains.

These very special neurons are 80% of our 25 billion cortical neurons, and they hold all of our higher memory and personality. Curiously, the pyramidal neurons in our prefrontal cortex, where we conduct all our highest thinking and planning, have totally maxed out the number of connections they can make to other neurons. Prefrontal cortex pyramidal neurons have on average 23 times more dendritic spines than the same neurons in our primary visual cortex. There is simply no more room around these particularly helpful neurons to make more physical connections to neighboring neurons. But there will be such room in your sim’s neural network, you can be sure.

Many of the dynamic features of neural architecture still elude us. Most molecular features can likely be ignored in a first model, as they exist to keep biological cells alive, not to allow them to think or remember. But some dynamic features are central to learning and memory. They involve things like Attractor Networks (Scholarpedia article) and Neurotransmitter Field Theory (Greer & Tuceryan 2010), and it will take a while to figure them out. But with 30,000 bright neuroscientists attending the annual Society for Neuroscience meeting, and hundreds of specialty neuroscience conferences, we’re getting closer to learning the full rules of neural learning and memory every year. If you want to study these topics further, or explore a career in this field, here’s a great free online textbook, Computational Cognitive Neuroscience (2014).

Source: Wikipedia, Dendritic Spines (2016)

Consider this insight about our brains that recent neuroscience work has suggested. Bourne and Harris (2007) tell us that in human brains, roughly 65% of our spines are ‘thin’, 25% are ‘mushroom’ spines, and the remaining 10% are stubby, branched, or other ‘immature’ forms. See the picture at left for the different shapes. They propose that thin spines are what we do our thinking with, interpreting our sensory data and relating it to our memories and motor outputs, and that mushroom spines are where we store our stable long-term memories. If this educated guess proves true, it will turn out that about one quarter of the connections in human cortex are dedicated to memory storage, and the rest to thinking, about our outside world and our own memories. That would make us each roughly 75% thinking machine, and 25% memory machine. Pretty neat, huh?

Open and Massively Bottom-Up Software Design: Conversational Coders

So what does agent and sim development look like as deep learning grows? What is the next big step toward the “singularity”? Let me offer a rough vision.

Github, a Facebook for programmers, launched in 2008, and now has over 14 million coders and 35 million repositories of open source code. It is already the largest bottom-up-built code repository in the world. But open, mass-collaboration platforms like Github are still in their infancy. Today’s programming is quite technical, and the code being manipulated has only low-level capabilities. Imagine what these platforms will be like when our deep learning-based natural language understanding systems become the front end to development environments that let people code in more natural ways.

In a vision of the future I call conversational coding, programmers will be able to manipulate neural networks as objects, change their architectures, and try them on different data sets and physical environments, using natural language, gestures, and visual environments. (Typing will decline but never go away among performance coders, as long as parts of our brain are specialized to use fingers.) There won’t be just a few tens of millions of coders in that world; there’ll be hundreds of millions. Even billions, because every human who speaks back to their own agent will, in that world, be an entry-level conversational coder.

Alex Repenning’s vision of a conversational programming envir. (Agentsheets.com)

In a deep learning future, whoever has the largest repository of neural network variations, the largest open data sets, and the largest tester and trainer community will increasingly build the most useful and trustable variants of naturally intelligent software, including Web 3.0’s next operating system and tools. Who will that be? Individual corporations, or the crowd? The massive parallelism of the web, and the growing ease of conversational coding, argue that open and crowd tools will increasingly be our preferred way to create the best naturally intelligent software. As the saying goes in open source: “given enough eyeballs, all bugs are shallow.”

Besides being open and massively bottom-up, this coding approach will have to be much more modular. Modularity is yet another thing that biology does very well. Neuropsychologists believe human brains have at least 500 discrete cognitive modules, specialized subsystems each dedicated to particular cognitive tasks.

Many software platforms are taking a big step forward in modularity right now. Open source platforms like Docker, which my developer and futurist friend Bino Gopal says are the future of large application architecture, split large applications into thousands of software containers, many of which deliver microservices: modular software processes that do small tasks for the user and communicate with each other over language-independent interfaces.

Containers allow programmers to push individual updates to any microservice or its OS "instance" in minutes, behind the scenes, without risk of breaking the application. This design increases the resiliency, parallelism, and virtualization of applications, an obvious advance in natural computing. Application virtualization follows operating system virtualization, pioneered in the 2000s using open source software like Linux, by companies like VMware. It is possible, then, that developing software containers for neural network modules may be one of the next steps forward in natural computing. The seeders, growers, and trainers of tomorrow's massive neural nets will certainly need the ability to update microservices within each application without breaking the architecture. Each application will need to fail or improve gracefully, just like the human brain.
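For readers who want to see what such a module might look like, here is a minimal, hypothetical sketch of a neural network "module" wrapped as a microservice, assuming Python and the Flask library. The /predict and /health endpoints and the placeholder scoring function are my own illustration, not anyone's production design; a real trained network would sit where score() now sits.

```python
# Minimal sketch: a neural-network "module" exposed as a microservice
# behind a language-independent JSON/HTTP interface. The scoring logic
# below is only a placeholder for a real trained network.
from flask import Flask, jsonify, request

app = Flask(__name__)

MODEL_VERSION = "0.1.0"  # bump this and redeploy the container to update the module


def score(features):
    # Stand-in for a real neural network forward pass.
    return sum(features) / max(len(features), 1)


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    result = score(payload.get("features", []))
    return jsonify({"model_version": MODEL_VERSION, "score": result})


@app.route("/health", methods=["GET"])
def health():
    # Lets an orchestrator restart or replace the service gracefully.
    return jsonify({"status": "ok", "model_version": MODEL_VERSION})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaged in its own container, a service like this can be versioned, restarted, and redeployed in minutes, callable from any language over HTTP, without touching the rest of the application, which is exactly the kind of graceful, modular improvement described above.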

This short survey should help the reader see that many fields are driving this great phase transition (aka “singularity” :) to natural machine intelligence. You can contribute to this epic event in many ways, and the first is just to be aware of it and to share that awareness with others.

If you want to do more, for any curious and analytical mind, young or old, communities and fields like IT, data science, computer science, biotech and bioscience, nanotech and nanoscience, deep learning and neuroscience offer great places to make an outsized contribution to our naturally intelligent future. Have fun and change the world!

Our Sims Manage Our Biases, and Make Us Perpetual Learners

To get back to where we are today, we know there will be many valleys and swamps to cross before most of us view ourselves as part-agent, part-biology. We'll continue to experience social prejudice and conflict from biased, inflexible, extremist biological brains in human society for decades to come. So besides improving our sims, we've got to keep empowering human beings, growing their empathy, and moving them from ideology to evidence-based thinking. But now that deep learning is on the scene, I believe we'll make more progress reducing human prejudice and bias by improving our sims than by improving our brains. Both strategies are important, but the first is far more exponential, for deep universal reasons.

As we come to see our agents as a natural part of us, we'll re-understand ourselves as lifelong learners, as perpetual children, as experimenters, and as investigators. Our accelerating personal learning abilities via our agents will make us much less inflexible, dogmatic, and judgmental of others. When it isn't so hard to change our views, via our agents' views, and when our agents know and have mapped our cognitive and social biases and are helping us manage them, every position will become more lightly held, able to be improved by the latest theories and data. At least in our sim's mind.

Spock mind melding with a computer, Nomad, in The Changeling, Star Trek: TOS, S02E03 (1967)

Our Mind Meld Future: 
How Millions of Folks Alive Today Will Be Postbiological Tomorrow

Given their relative slowness, their limited capacity for change, and their imperfect error correction at the level of molecular biology (which means they all must age and die, even as we grow wiser with experience), I'm convinced we'll eventually grow out of our biological brains and leave them behind. Our sims are the future of human minds, and our brains are increasingly our past. They've been a great advance, but we'll soon leave them for something just as natural, but even better: postbiological life.

What will that transition look like, on an individual level? Here’s my best predictive scenario for that future. It is a future where we merge with our sims, falling gently down a “slippery slope” into a postbiological form of life. A prediction we can call the “mind meld future” or the “slippery singularity” for short. It’s slippery because it will happen gently, by degrees, but each stage will be so slick we won’t be able to resist it. One day we’ll just wake up and realize we’re a different kind of species.

Minsky’s famous (and not very organized) book (1988)

Understanding our mind meld future begins by realizing that every one of us is already what Marvin Minsky called a "Society of Mind." This means we have many independent mindsets inside our own brain, each a distinct yet overlapping set of neural networks that stores its own redundant data (as well as accessing common data). This diversity allows us to see the world from many simultaneous viewpoints, and to argue with ourselves over every important decision we must make. Our minds, it turns out, are very much like beehives, with each mindset being like an individual bee, doing a constant waggle dance with its thoughts and internal dialog, trying to convince the whole hive to do something in our best interest. We maintain this redundancy and diversity of viewpoints in our brains because it makes us more adaptive, but sometimes it malfunctions, as we see when our minds split into multiple personalities during trauma. Most of the time, though, the quality and quantity of information shared between each mindset is so high that it is most useful for us to think of ourselves as one person. Even though we are, in actuality, also a society of mindsets.

Our minds are like beehives, with each unique mindset (semi-redundant neural net) acting like a bee in swarm cognition.

You can probably see where this is going. At some point later this century, let's guess circa 2060, your sim will ask you, or your children if you are no longer around, if you would like a direct "mind link", or BCI (brain-computer interface), to its own neural networks, via removable nanobots (transducers) in your brain that wirelessly and continuously connect your two natural intelligences. This mind link will allow you not only to continue talking to and arguing with yourself within your own biological mind; you'll also be able to use the same high-bandwidth neural signaling language to talk to and argue with the mindsets in your sim. Once this mind link is sharing sufficiently high-quality, high-quantity, and high-speed neural information, it will be more useful for you to think of yourself not as two minds, but as one.

Direct mind links without the use of neural technology were first popularized by parapsychology researchers, as telepathy (often claimed, never found). Real BCI research began in the early 1970s at UCLA, and it happens in hundreds of labs today. Mind links using nanotech have been done well in sci-fi since the 1960s. An on-screen depiction of a mind meld with a computer happened for the first time ever (to my knowledge) in Star Trek: The Original Series, when Spock did a Vulcan mind meld with a computer called Nomad, whence I get the name for this scenario. See Nexus (2012), by futurist Ramez Naam for a neat recent sci-fi story featuring mind meld nanotech.

The nanobots your sim offers you will be able to do neural synchronization, which is my favorite evidence-based model for how consciousness arises, why we lose it in deep sleep and under anesthesia, and why our consciousness rises and falls so much in intensity throughout the day. Neural synchronization, and the fact that your sim is built out of neural networks that think, feel, and communicate in the same way your biological brain does, will make your sim and biological brain feel not only like one organism, but like one shared consciousness. You will slowly but inevitably realize you've grown into a new, larger "you" that is still capable of arguments and disagreements, a you that can move its point of view between your biological and your electronic self.

Great technical work on consciousness. Buzsaki (2011)

If you have an interest in graduate-level neuroscience and want more on neural synchronization and the mechanisms that do the synchronizing, see Buzsaki's Rhythms of the Brain (2011) and read up on ephaptic coupling. As with memory encoding, neuroscientists don't yet have all the details, but consciousness is no longer a mystery we expect we'll never understand. It is a fully physical process, a puzzle that we've already partially solved. Tomorrow's computational neuroscientists will surely crack its remaining details, and we'll duplicate it fully in our neural network-based machines.
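For readers who like to see the flavor of these models in code, here is a toy, pure-Python Kuramoto-style simulation, in which a population of oscillators with different natural frequencies drifts into synchrony once their coupling is strong enough. The parameters and setup are my own illustration; this is a cartoon of synchronization in the abstract, not a model of the biological mechanisms Buzsaki describes.

```python
# Toy Kuramoto model: N oscillators with random natural frequencies
# synchronize when the coupling K is large enough. A cartoon of
# synchronization in the abstract, not a biological model.
import cmath
import math
import random

random.seed(0)
N, K, DT, STEPS = 100, 2.0, 0.01, 2000

freqs = [random.gauss(10.0, 1.0) for _ in range(N)]          # natural frequencies (rad/s)
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases


def order_parameter(phases):
    """|r| is near 0 for scattered phases and approaches 1 at full synchrony."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))


for step in range(STEPS):
    mean_field = sum(cmath.exp(1j * p) for p in phases) / N
    r, psi = abs(mean_field), cmath.phase(mean_field)
    # Each oscillator is pulled toward the population's mean phase.
    phases = [p + DT * (w + K * r * math.sin(psi - p)) for p, w in zip(phases, freqs)]
    if step % 500 == 0:
        print(f"step {step:4d}  synchrony r = {order_parameter(phases):.2f}")
```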

We'll also figure out how to make neural nets that not only think and remember, but feel. Every one of our different feelings (anger, happiness, sadness, envy, etc.) is, at its core, a different string of positive and negative sentiments that we have associated with past actions, like notes in a song we hear whenever we think about various subjects or possible future actions. We build these "feeling songs" using neural networks specialized for sentiment (the amygdala, the limbic system, and parts of our prefrontal cortex), the last of which is what doctors cut through in surgical lobotomies during the 1930s-1960s, creating much more passive and emotionless people. These sentiment networks give us "gut feelings" about what to do next. We need those feelings when rationality fails us, as it often does. Patients with lesions in these emotional networks can't feel, but can still access sentiment memories in their prefrontal cortex. As neuroscientist Antonio Damasio describes in Descartes' Error (2005), some of these patients can argue the merits and drawbacks of various actions forever, but they can't make decisions, and are as unmotivated as a lobotomy patient.

This is a clue: if you want to stay motivated in life, let yourself consciously feel both the highs and lows of your day, and observe closely how those feelings relate to your thoughts, and vice versa. You may need to think more at times, as with unconscious bias or anger. You may also need to feel more at times, as with procrastination due to unconscious fears. When you consciously feel and acknowledge your emotion, whatever it is, and think about how your thoughts triggered it and whether it was useful, you can get on with making changes to both your thoughts and feelings that will give you real progress. Both are sets of neural networks, trainable by your mind.

So there are excellent arguments that naturally intelligent machines will have to have sentiment networks (gut feelings) as they get smarter. Since no finite physical mind can ever be "Godlike" in its intelligence, and no being ever has perfect information, their rationality will regularly fail them, just as ours fails us. When logic fails and evidence is weak, gut feelings and moral sentiments will always be needed to inform both us and our naturally intelligent machines, and to motivate better decisions. For students, let me recommend The Oxford Handbook of Affective Computing (2014) for more on the work being done today to bring sentiment to computers.
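As a toy illustration of that argument, here is a hypothetical Python sketch in which an agent's rational scores for its options come out tied, and an accumulated sentiment tag, learned from how past outcomes felt, breaks the tie. The options, valence values, and helper functions are all invented for illustration, not drawn from any real affective computing system.

```python
# Toy sketch: when rational scores are uncertain or tied, an accumulated
# "gut feeling" (valence learned from past outcomes) breaks the tie.
# All names and values here are invented for illustration.
from collections import defaultdict

# Valence memory: a running estimate of how past outcomes of each action felt.
valence = defaultdict(float)


def record_outcome(action, felt_reward, rate=0.3):
    """Nudge the stored gut feeling toward how the outcome actually felt."""
    valence[action] += rate * (felt_reward - valence[action])


def choose(options, rational_scores, epsilon=1e-6):
    """Prefer the rationally best option; fall back on gut feeling for ties."""
    best = max(rational_scores.values())
    tied = [o for o in options if rational_scores[o] >= best - epsilon]
    if len(tied) == 1:
        return tied[0]
    return max(tied, key=lambda o: valence[o])  # sentiment breaks the tie


# Past experience: skipping lunch felt bad, taking a walk felt good.
record_outcome("skip lunch to keep working", -0.8)
record_outcome("take a walk before deciding", +0.6)

options = ["skip lunch to keep working", "take a walk before deciding"]
rational_scores = {o: 0.5 for o in options}  # the evidence is a wash
print(choose(options, rational_scores))      # the gut feeling decides
```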

It's likely that you'll notice some differences in the way your biological and electronic minds think and feel. At first, you'll very likely still feel most at home in your biology. But your sim, your electronic self, will be learning, thinking, and feeling millions of times faster than your biology does. So it will eventually also feel like home. Consider that one of the first things the new "hybrid" you will want to do is scan and back up all your biological brain's memories into your electronic brain, as your biology will continue to age and die.

As you perform this memory backup, you'll be amazed to realize that you can recall your life's memories, and think with them, millions of times faster in your electronic mind than in your biological mind. As your electronic mind improves, you'll even feel your center of consciousness increasingly moving out of your biology and into your sim. Again, your electronic mind may feel in several ways more primitive than your biological mind in the early years of this technology. You'll know you haven't captured everything yet from your biology, and you'll continue to upgrade your sim every year. But as your sim encodes ever more of your biology's algorithms and invents new ones biology doesn't have, your personal sim will increasingly become the place where "you" spend most of your mental life.

Here's perhaps the biggest shocker: when your biological body dies after years of such a mind meld, it won't feel like death, from your perspective. It will feel instead like metamorphosis, a natural change of form, like going from childhood into puberty, or from caterpillar into butterfly. You've just experienced the slippery singularity. Using the mind meld, you slid right into your postbiological form, and you did it in the most natural way imaginable, without interruption of your personal consciousness or feelings; in fact, with an enlargement of them.

Welcome to your new you. You now feel like a perpetual — and effectively immortal — child, constantly able to grow and learn. You also have a potentially perpetual lifespan, assuming you want to stick around and keep improving. This outcome, where biology begets postbiological life, may be a universal developmental process that arises on every Earthlike planet in our universe. We shall see, as they say.

Because I think human aging and death will be increasingly unacceptable to twenty-first century citizens, and because biological science promises each of us only a few more decades of healthy lifespan, at best, due to unfixable and imperfect error correction in our molecular biology, I expect that hundreds of millions of us, and perhaps even billions, will choose to personally experience this particular metamorphosis when it becomes available, perhaps in the late 21st century or soon after.

This is definitely the most amazing thing I've come to realize as a futurist. Not quite as amazing, but still surprisingly good news, is that even now, folks who want to can preserve themselves and their loved ones at death, for reanimation or uploading in a future where postbiological life runs the show. What's more, these preservation technologies are going to get a lot less expensive and better validated later this decade, if present trends continue.

As futurists, I believe it is our moral responsibility to tell this inspiring story often and well, to keep checking it against the emerging science, and to build our biology-inspired machines, and these preservation technologies, as quickly and as accessibly as we can. The choices we make with them today determine how many humans will benefit from them tomorrow. Our postbiological destination may be inevitable, but the quality of the path we walk toward it is entirely in our hands.

We’re on the edge of an amazing world, and there’s never been a better time to be intelligent, future-oriented optimists. Thank you for reading!

Our Next Post

Our next post considers the question of safety and morality in our agents as their intelligence grows. We'll again look at biology to understand how life maintains safety and trustability in social collectives, and argue that something we can call natural security will be the future of physical and cyber security in tomorrow's intelligent machines.

Calls to Action

● Consider putting some of your speculative investment savings into a company using or improving deep learning. My top pick at present is NVIDIA (NVDA). They are trading at 45 with a P/E of 39. They have gained 225% over the last 18 months, and they may drop a bit soon due to profit-taking, as they've recently run up a 125% increase. Nevertheless, I predict they will gain at least 80% in value over the next 12–36 months. The impact of deep learning is still greatly undervalued in the business world, and NVIDIA is a solid company in the right place at the right time to be a Levi Strauss & Co. to the coming Gold Rush.

● Consider funding an individual's deep learning, computer science, or neuroscience training or research on GoFundMe or a similar site, or investing for equity in a deep learning startup on an equity crowdfunding site like StartEngine or Crowdfunder (available to any of us), or in one of the (presently) 122 deep learning startups on AngelList (for accredited investors only, unless you join a syndicate).

● If you’d like a reminder for the next post in this series, enter your email to get our brief biweekly newsletter, Accelerating Times. Feedback? Reach me at john@foresightU.com.

John Smart is CEO of Foresight University and author of The Foresight Guide. He studied physics, chemistry, molecular biology, medicine, computer science and business before getting his MS in Foresight from U. Houston. He blogs on foresight development at Ever Smarter World. You can support his writing on Patreon, and follow him on Medium, Twitter, Facebook, YouTube, or Reddit, at /r/ForesightU.

Think others might like this article? If so, click the green ❤, thanks!

CC 4.0. Anyone may share or adapt, but always with link and attribution.