Deep neural nets and the purpose of life

Nitin Pande
7 min read · Sep 14, 2016


A few weeks ago, I was in the process of transitioning from one project to another at work. This provided a great window of time to read up on some long-pending topics of interest, and machine learning topped that list. It is a field that has already permeated the technology world deeply, but I had no understanding of what it was all about. Just a few weeks of surface-level reading since then (and playing around with some of the tools) has left me fairly convinced that we are fast accelerating towards a general artificial intelligence. Advances in deep learning, along with the enormous amounts of data pouring in from fast-spreading IoT devices, will ensure that this future is more near than far. But for the purposes of this essay, I'll stick to my hypothesis that everything around us is a deep neural net, and to the metaphysical implications that arise if we embrace this viewpoint.

Transfer learning

As a machine learning (ML) newbie (wannabe, rather), my first exposure to ML was pretty magical. Google's new ML library, TensorFlow [1], allowed me to train a deep neural net (DNN) model that could very precisely label different types of rooms in a house, with just a few hours of training on my own dataset of images scraped from Google, and that too on my ordinary MacBook Pro. I was overjoyed by this new magical power and could not sleep that night out of excitement :). After reading a bit more, and once the initial euphoria had subsided, the keyword that stayed with me was the interesting concept of “transfer learning”. As Google puts it: “Transfer learning is a technique where we start with a model that has already been trained on another problem. We then retrain only a few layers of the model on a new problem, such that the resulting model solves the new problem at hand. Deep learning from scratch can take days, but transfer learning can be done in a much shorter time.”
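To make the idea concrete, here is a minimal sketch of transfer learning using tf.keras. It is not the exact TensorFlow-for-Poets script from the codelab I followed; the base network (MobileNetV2), the five room categories, and the “room_images/” folder of scraped images are stand-ins for illustration.

```python
import tensorflow as tf

# Start from a model already trained on ImageNet, minus its final classification layer.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze the lower layers that already know edges, corners, shapes

# Bolt a fresh top layer on, to be retrained for the new problem (room types).
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 room categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# One sub-folder of scraped images per label, e.g. room_images/bedroom, room_images/kitchen, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "room_images/", image_size=(224, 224), label_mode="categorical", batch_size=32)

# Only the new top layer's weights change: hours on a laptop instead of days from scratch.
model.fit(train_ds, epochs=5)
```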

Deep learning derives its name from the algorithms it uses for learning, called deep neural networks (DNNs). They are ‘deep’ because they contain many layers of neurons. Each layer (or a set of layers) creates an output that the next layer(s) can build upon [3]. So in my case above, the Google pre-trained image model already had lower layers that identified simple image components like edges and corners, and intermediate layers that could figure out shapes. The final layer, the only one I retrained, did the top-level job of bringing it all together to give the final interpretation and label to the image.
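A toy model (purely illustrative, not the Google model above) shows what “deep” means in code: each layer's output becomes the next layer's input, so the network can build low-level features into higher-level ones.

```python
import tensorflow as tf

# Purely illustrative stack: the comments describe the usual intuition
# about what each stage tends to learn, not a guarantee.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # low-level patterns: edges, corners
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # mid-level patterns: shapes, textures
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # combines the features found so far
    tf.keras.layers.Dense(5, activation="softmax"),    # top layer: the final label
])
model.summary()  # each layer's output shape is the next layer's input shape
```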

Our mind as a deep neural net

A few days after the above episode, I encountered Ray Kurzweil’s amazing talk “The Accelerating Future” [2], in which he talks about the human brain as a DNN-like pattern recogniser and notes that there are parts of our brain (the old brain) that are exactly the same as a lizard’s.

Combined with the concept of “transfer learning”, this presented DNNs as a great architecture for explaining the transfer of learning from generation to generation within a species, as well as transfer across species. (I agree that this might be somewhat obvious, given that the architecture of neural nets is inspired by the working of the brain itself. But the key concept to focus on here is the deep part of the DNN.)

So our mind is a DNN (or a series of them) built on the learning models of the previous species from which we have evolved. Our minds should also carry the models that were genetically transferred from our own parents and other ancestors. Apart from basic knowledge of the world and recipes for survival, these models probably also contain our sense of right and wrong (morality), fine-tuned over millions of years of experimentation.

Even learning, during a single lifetime, looks like a DNN model-creation process. During the learning phase our mind works hard to create the right model (by triggering different kinds of neural patterns), one that consistently produces the desired result, such as playing the guitar well. Once the DNN model/pattern is created, it is computationally faster to run the trained model in a real-life situation, or to play the next new song on the guitar. This is why the process of learning itself is difficult, but once learnt, a skill is easy to execute.
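The asymmetry between learning and executing shows up in code too. A rough sketch (random data and a made-up tiny network, only to illustrate the point): training loops over the data many times while adjusting weights, whereas running the trained model is a single forward pass.

```python
import time
import numpy as np
import tensorflow as tf

# Made-up data and a tiny network, just to show the training/inference asymmetry.
x = np.random.rand(1000, 32).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

t0 = time.time()
model.fit(x, y, epochs=20, verbose=0)   # "learning the guitar": many passes, weights keep changing
train_time = time.time() - t0

t0 = time.time()
model.predict(x, verbose=0)             # "playing the next song": one pass through a fixed model
predict_time = time.time() - t0

print(f"training took {train_time:.2f}s, inference took {predict_time:.2f}s")
```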

Experience, then, could simply be an increase in the depth of our various neural nets.

And creativity could be a result of transfer learning. Creatives probably work on the ability to keep retraining the top layers of their DNNs while still using the base models trained in another domain. Experts in a particular field, on the other hand, have trained their DNNs much deeper in order to solve a particular problem in extreme detail.

What makes deep neural nets special

I think an important feature of DNNs is that they can act both as a machine and as storage. When there is energy flowing through them, they are living machines, directing that energy to create dynamic patterns that optimise towards a desired output. And when the energy stops flowing, they become static patterns, which we call models (DNN models).
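In everyday Keras terms, the same net really does play both roles. A small sketch (the file name “pattern.keras” is just an example) of a network acting as a live machine while it runs, and as a static stored pattern once saved:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# "Energy flowing": the net is a machine, actively turning inputs into outputs.
outputs = model(np.random.rand(2, 4).astype("float32"))

# "Energy stops": the same net persists as a static pattern of weights on disk.
model.save("pattern.keras")

# Load the stored pattern and it becomes a running machine again.
revived = tf.keras.models.load_model("pattern.keras")
```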

So are we DNNs?

One interesting idea is to think of seeds (of plants, animals, etc.) as DNN models. Seeds are essentially trained models (learnt and stored DNN patterns) passed from generation to generation, wherein every generation tries to improve upon the previous model by giving it more training and optimising it to thrive in the environment it is placed in.

Taking this thought a step further — if seeds are models then we probably are machines.

Model (seed) + Optimal environment + Energy = A DNN-based machine

So the plant that arises from the seed is a manifestation of the trained model. This physical manifestation is essentially a learning DNN machine that runs new experiments, learning from its current existence in order to improve the underlying model from which it was created. It adds new neural layers if it finds new information that helps the next generation adapt better than the current model allows for.

Delving deeper into this thought, it is possible that everything around us is actually a deep neural pattern in space and time. In fact, the whole universe is probably a giant DNN machine :). These DNNs, though theoretically built on top of each other (layer by layer), might physically reside inside each other as well, which corresponds to the fractal nature of things around us. We are DNNs inside DNNs inside DNNs. The only problem with this theory is that it is unclear what the giant universal DNN is optimising for. Because if we can lay our hands on that purpose, we will probably have a fair understanding of the purpose of everything within the universe.

Purpose of life

Now, if we are DNN machines inside other DNN machines (like the earth), then what could our purpose be? I think our purpose is probably no more than the purpose of an electrical signal in our brain: to flow, in a direction which may or may not be part of the winning neural pattern (or model). Our purpose is no more than the purpose of a water particle that is part of a flowing stream. We as a unit do not matter. What matters is the emergent behaviour (the neural pattern) arising from the collective work of everyone at this layer of the cosmic deep neural net. The species, the social constructs, the cultures, the religions, the countries, the languages, and the environments like the arid desert and the abundant rainforest are all essentially experimental dynamic patterns trying to optimise themselves together in the service of an emergent, mega universal pattern which is itself trying to optimise for ‘something’. That something is still unclear to me. But our individual existence, devoid of the mega machines we are in, is essentially meaningless.

Next steps

This is the first draft of this thought framework, and it is a long way from being fully coherent. But there are a few key elements strongly embedded in it:

  1. Deep neural nets provide a comprehensive framework to explain the mechanics of our existence.
  2. Everything around us (including us) is an experiment in itself or part of another experiment. Experiments continue until they reach an optimal solution to the problem they are optimising for. Once an experiment succeeds, it gets stored as a model for the next generations to improve upon.
  3. Deep neural nets take time to arrive at a solution, and most of the energy is used during this period of trial and error. Running the model itself is a much faster process, since the data just needs to flow through a set path; that path may be structurally complex, but it is a single path and therefore faster to navigate.
  4. Complexity results from one model building upon another, which then forms the basis of yet another model, and so on. Complex models are hierarchical collections of simpler models.
  5. Behaviours are emergent. The individual neurons/units work on certain acquired rule sets derived from their underlying models, but are unaware of the overall purpose or result they are contributing to.

Given the above concepts, I next intend to apply this framework to some metaphysical concepts like karma, life and death, morality, and more, to see if there is something interesting to be found there. More soon!

References:

[1] TensorFlow for Poets — https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html?index=..%2F..%2Findex#0

[2] Ray Kurzweil’s talk “The Accelerating Future” — https://www.youtube.com/watch?v=DIIUNtUVDPI

[3] This is What Happens When Deep Learning Neural Networks Hallucinate — http://thenewstack.io/deep-learning-neural-networks-google-deep-dream/
