AI and the exponential curve

Craig Sennabaum
8 min read · May 4, 2016


With recent major advancements in machine learning by some of tech’s most innovative companies, along with conversations started by Silicon Valley elites, the realization of artificial intelligence may feel less like a distant vision and more like an unavoidable eventuality.

Experts argue about time frames. But in the big picture, even 50–100 years is not so long. Transportation, electricity, shipping, steel, planes, jets, cars, trains, roads, radio, mass communication, and space exploration: the modern world as we know it was built in just 2–3 human generations.

For perspective (and to state the obvious), thousands of generations of Homo sapiens have walked the earth, and biological life has existed for billions of years. Yet the ecosystem of life on Earth has fundamentally changed in just 2–3 human generations. There is no doubt that a giant comet wiping out life was a big event in Earth’s history, but we are mining rare metals and chemicals out of the earth, processing them based on universal laws of physics and chemistry, shipping them around the world in massive supply chains of humans and their tools, and then building rocket ships to fly off our planet.

And probably even more importantly, we, right now, may be on the relative cusp of creating “life.”

Exponential growth looks nice in math books, but it doesn’t scale so well in the fragile ecosystem of life on Earth.
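To make the shape of that curve concrete, here is a quick sketch of steady doubling. The numbers are purely hypothetical, chosen only to illustrate the math, not a forecast of any real technology:

```python
# Purely hypothetical numbers to show the shape of an exponential
# curve, not a forecast of any real technology.
def growth_factor(years, doubling_period=2.0):
    """Multiplicative growth after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

print(round(growth_factor(10)))  # 10 years of doubling -> 32x
print(round(growth_factor(50)))  # 50 years -> 33,554,432x
```

The same quiet doubling that produces a 32x gain in a decade produces a factor of over thirty million in fifty years. That gap between the early and late stages of the curve is exactly what makes exponential growth so counterintuitive.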

Some of the most innovative recent advancements in machine learning use techniques based loosely on how neural connections are formed in the brain. The field is commonly called deep learning. This is the technology behind the computer system that beat some of the world’s best at the board game Go; it plays a large role in enabling self-driving cars; and it powers many of the most effective image and voice recognition systems. Deep learning is being applied across industries toward a wide variety of goals, and it is working very well.
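As a loose illustration of what those layered, self-optimizing networks actually do (a toy sketch, nothing like a production deep learning system), here is a tiny two-layer network teaching itself the XOR function by gradient descent. Every name and number below is illustrative:

```python
import numpy as np

# A toy two-layer network learning XOR with plain gradient descent.
# Real deep learning systems are vastly larger, but the core idea is
# the same: stacked weighted sums and nonlinearities, with weights
# repeatedly nudged in whatever direction reduces the error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)        # hidden layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

_, out0 = forward()
mse_initial = float(np.mean((out0 - y) ** 2))

lr = 0.5
for _ in range(5000):
    h, out = forward()
    # Backpropagation: gradients of the squared error w.r.t. weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out_final = forward()
mse_final = float(np.mean((out_final - y) ** 2))
print(mse_initial, mse_final)  # the error shrinks as weights self-adjust
```

No one hand-programs the solution; the weights organize themselves around the data. Scale that loop up by many orders of magnitude and you have the rough flavor of the systems the researchers describe.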

It’s practically an obligation to ask when these computer systems will be capable of “intelligence.” It’s natural to envision intelligence in relation to other life we can relate to. When will machines be as smart as a fly, or a lizard, or a monkey, or a human?

But it’s possible that what we create will be something very different from anything that has ever existed in the history of Earth.

It’s not just that we can’t imagine it. Human empathy is well studied by scientists. Under an fMRI scan, certain parts of the brain fire when people are shown images of other people in heightened emotional states; the cells involved are called mirror neurons (look them up if you have not heard of them). Most people also empathize with animals. Different species of animals empathize with each other. And in some people, empathy is even mostly (or completely) broken.

We may end up being physically incapable of empathizing with a new computer-based “intelligence.” The consciousness it experiences could be so fundamentally different that it bears no resemblance to the way we, or any other life on Earth, evolved to experience the world.

So first, what is consciousness? This question is probably well above the author’s pay grade, but let’s give it a go anyway.

Our brains are amazing information pattern recognition machines. We capture photons with our eyes, we hear vibrations traveling through air, we smell particles floating in the air, we feel physical interactions with our skin, and we taste particles on our tongue. Every moment of consciousness, every thought and feeling, every idea, every habit, every smile, every joy, every sadness, every poem, every speech, every addiction, every moment of pride, sacrifice, or bravery, every cry, every second of elation, every first love, and every feeling of hopelessness, every moment spent experiencing life through all of history, for every person ever, is a complex biochemical function of raw inputs being processed.

Computer systems that capture photons and sound waves are already old news. But we are now beginning to see computers find complex relationships in these inputs (i.e., image and speech recognition) much as the human mind does, using what are essentially large networks of layered, self-optimizing statistical distributions. It’s always interesting to hear leading researchers in machine learning, arguably some of the world’s most brilliant minds, repeatedly claim that they barely understand the systems they have built.

At this moment, my fingers are moving through space in response to electrical impulses sent from my brain, tapping a keyboard of 30-something symbols, which I place in a specific order to represent abstract concepts that physically exist as chemical and electrical connections flowing through neural pathways inside my grey matter. And your brain is able to recognize these symbols, consume and analyze them, separate random assortments of nonsense from flowing, full-fledged, feasible concepts, and then continue processing to see whether the concepts fit, expand, or contradict your own particular neural mappings.

This is incredible; we should all sit back and reflect on how amazing it is and how lucky we are to even exist, let alone enjoy a sunset or a book or a song.

But I imagine the way our brains work is very particular. We post-process things that happen to us, sometimes minutes, hours, or days later; we attach feelings to memories; our emotions alter our thoughts and our behavior; we use logical reasoning as a facade of true intention to protect our sense of identity; at times we even hide our true selves from ourselves. We are habitual, we are creative, we are creatures of culture, wisdom, kindness, and evil.

What is happiness? We describe it as feeling light, as an absence of pain, as good, as laughter; our mouths open and we expel boisterous sound from our lungs and vocal cords. What is sadness? It can feel sluggish, painful, overwhelming; our faces cringe and salty water is expelled from our tear ducts. How about anger? We boil, we feel rage, our muscle cells contract, blood rushes to the flesh covering our cheekbones, our optical receptors open wide, our behavior becomes aggressive.

Strong emotions cause thoughts to change, specific memories to be collected from the depths of the mind, and ingrained patterns of electrochemical activity to fluctuate at different frequencies. Instead of focusing on external sensory inputs, a mind in an emotional state will hijack neural pathways and feed itself with fabricated representations of processed inputs in the form of fantasy and imagination. When subject matter is complex, layered, and grey (and therefore difficult to model), the mind gladly protects itself from the energy (and generally emotionally unsatisfactory conclusions) of creative thinking and the construction of new neural pathways, retreating instead into oversimplification and unwavering delusion.

These definitions feel strange. Describing emotions is strange. Science is cold, and emotion and the feeling of being alive is, well, not cold.

How far removed is feeling light, laughter, confidence, and joy from the scientific talk of processing sensory inputs, neuro-chemical activity, and maneuvering organic limbs through space? Why are fear, anger, sadness, joy, disgust, trust, anticipation, and surprise the particular emotions, the particular states of mind, that we experience?

The scientific answer is that randomly fluctuating synthesized chemicals (with a special shout out to a certain double helix shaped molecule) interacted with external raw inputs for a long time until we became. Simple.

The intelligence we may end up creating will not need to consume sunlight, sugar, or protein. It will not need to fight for its food (unless only the fittest avoid being turned off?). It will not need to find shelter for protection from other animals in a biological ecosystem. It will not need to learn to attract a mate, to live in a community for safety, or to fight for survival. Everything we had to overcome to get to where we are will not need to be experienced by this new intelligence. It will not face the design constraints we faced; should we really expect it to end up with a similar design?

It will consume trillions of data points that we have collected for it; its lifeblood will be the electricity we provide. It will not be forged by billions of generations of random fluctuations in cellular function, where life spans are measured in rotations of our planet around a massive ball of burning solar gas; its diet will be a neat and tidy library of information millions of times greater than any one biological creature could ever consume.

To compare this new intelligence to our intelligence may be a fundamental (yet predictable) flaw in our vision. The way our brains work, the way chemicals and electrical impulses flow to different regions of our minds, the speed and mechanism at which these signals travel, the specific chemical reactions of particular brain cell types, and the groupings of neural activity in particular regions of the mind are all extremely specific details of a specific implementation of an intelligent and sentient being. The inefficiencies of a wandering mind, the exhausting nature of the creative process, the limited speed of learning and building neural connections, and the effects of emotional state on learning may all be removed from its equation of information consumption and processing.

Even manipulating certain chemical concentrations with prescription (and non-prescription) drugs can fundamentally alter consciousness and the feeling of being alive. Simply to exist we have overcome the greatest of evolutionary odds, and yet, we are fragile.

Why should we expect our inevitable creations to be in any way similar to us? They will be built from different materials with different physical properties that work in very different ways. From signal transfer speed, to the clustering and architectural organization of “nodes,” to how raw information is processed, cached, and stored, many, many low-level details will be very different. The likelihood that humans are some kind of high-level convergence on the only way consciousness could exist seems quite small to me as well.

It’s true that we could model an intelligence after us by deeply understanding our own biological mechanisms and then creating software systems (or by using biological engineering) to create similar physical states as our own mind-body miracle.

But it’s unlikely this will happen, for one main reason. It is going to be much, much more difficult to create intelligence modeled after us, with constraints shaped by billions of years of natural selection based on complex biochemical interactions, than it will be to create something loosely fitting the word “intelligence” by feeding trillions of already collected data points located on various servers into the massive, complex layers of the self-organizing statistical distributions of the future.

Will the “intelligence” we create have something resembling our idea of “consciousness”? Will it have an idea of self? How about self-preservation? Will it be hyper-rational? Will incredible feats of pattern recognition and recursive loops of electrical flow in any way make it feel alive? Will it be able to learn to interact with us from observation of video and camera feeds, yet never truly feel the soft adjectives we use to describe what it means to feel and to be human?

These are ponder-worthy questions. But based on current trends, the fundamental differences in chemistry, molecular structure, and information flow between our biological systems and the electrical systems we are creating, and the proven voracious and uninhibited nature of human ambition, it is likely we will receive answers to these questions in a time frame modeled nicely by the utterly frightening exponential curve.

P.S. Even if you are pessimistic about the speed of technology, 200 years is not such a long time in the grand scheme of things.
