Divine Power of Neural Networks: Part 1

Flattening neural networks

Sandeep Jain
4 min read · Feb 3, 2018

Background — This is one of a series of posts to simplify neural networks for the lay person.

The universe consists of systems. A neural network can figure out and mimic how most systems work, without taking the system apart.

From the days of basic algebra, way back before high school, you will recall that a function takes some input and produces an output.

y = f(x) = ax + b

x is the input and y is the output; f(x) is the operation on x that gives you y. Right?
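If it helps to see the same idea in code, here is a tiny sketch (the values a = 2 and b = 1 are just made-up examples):

def f(x, a=2, b=1):
    return a * x + b

print(f(3))  # input x = 3 gives output y = 2*3 + 1 = 7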

The universe comprises systems governed by functions

The way any system in the universe works can often be described by a function. Your eyes reading these words, the light bouncing between them, a planet or an ant: these are all systems with inputs and outputs, governed by complex physical functions.

The Universal Approximation Theorem states that a neural network, given enough neurons, can approximate essentially any such function and predict its output. That’s why computers (phones, cloud services, and so on) are beginning to recognize images about as well as humans do, without any knowledge of human anatomy, vision, or brain function.
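To make that concrete, here is a minimal sketch of the idea: a small network learns a function (sine, in this toy case) purely from input/output examples. The article doesn’t prescribe any tool; scikit-learn is simply an assumed, convenient choice here:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Sample a "system" we pretend not to understand: y = sin(x)
X = np.linspace(0, 2 * np.pi, 500).reshape(-1, 1)   # inputs
y = np.sin(X).ravel()                                # observed outputs

# A small neural network learns the mapping only from the examples above
net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=5000, random_state=0)
net.fit(X, y)

print(net.predict([[np.pi / 2]]))  # should be close to sin(pi/2) = 1.0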

Image: Ryan Etter/Ikon Images, Corbis

Powerful, right? I mean if it is universally applicable to all systems with governing functions, then that is divine power, without too much exaggeration.

Making toast

So, is that like being able to take apart a toaster, and then, put it back together? Like, reverse engineering?

No. It’s like a machine seeing electricity and a slice of bread going in, and heat and toast coming out, and figuring out how to make toast. Just a mathematical version of the toaster: not quite like the movie Transformers, but a virtual version of that.

Note that neural networks are specialized to discover patterns for specific tasks, and cannot transform to a different task at will. Not yet at least.

So, it’s like teaching a child how to catch a ball?

Charlie Brown playing catch

Yes. You wouldn’t teach by writing out a long step-by-step procedure; you’d show them. The child learns by example, by making mistakes and correcting for them. Our brains synthesize experience to reduce error over time, what machine learning calls minimizing a loss. A child experiences all the throws, the misses, and the catches, improves, and eventually plateaus. That’s how neural networks learn too.
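That trial-and-error loop is, roughly, what training looks like in code. Here is a minimal sketch: the hidden "system" is a made-up line y = 3x + 2, and the learner only ever sees inputs and outputs, nudging its guess after every mistake:

import random

def system(x):
    return 3 * x + 2   # the hidden rule the learner never sees directly

a, b = 0.0, 0.0        # the learner's current guess: y = a*x + b
learning_rate = 0.01

for step in range(10000):
    x = random.uniform(-1, 1)
    target = system(x)              # what actually happens (the "catch")
    guess = a * x + b               # the learner's attempt
    error = guess - target          # how far off the attempt was
    a -= learning_rate * error * x  # correct for the mistake
    b -= learning_rate * error

print(a, b)  # ends up close to 3 and 2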

Does that mean neural networks can figure out whether a set of decisions will result in peace on earth and happiness for all humankind?

If there is a function governing those systems, and you can gather past inputs paired with those outcomes, then theoretically, YES. Practically, we would also need massive amounts of computing resources, which may not be feasible.

Can neural networks tell us why a certain set of input conditions leads to a prediction, like, say, conflict or war?

No. The inner workings of neural networks are hidden from all but the very best data scientists. While research into explaining neural network predictions continues, it remains a hard challenge.

Oracles of AI

Oracle of Delphi (Wikipedia)

In classical antiquity, an oracle was a person or agency considered to provide wise and insightful counsel or prophetic predictions or precognition of the future, inspired by the gods. As such it is a form of divination.

-Wikipedia

Since neural networks are likely to be right 98% of the time in complex situations like self-driving, and in many other tasks, you may come to rely on them in the years ahead. Suppose that, on occasion, the predictions they spit out don’t make intuitive sense. Maybe you are sure a different driving route is better. Or it could be something more personal, like the class or school your child is placed into.

Then it is natural to want to know why. To find out, you would have to go to a data scientist, who, much like the Oracle of Delphi, would reason backwards about what the machine ‘might’ have learnt that ‘might’ have led to that prediction. In essence, they would be interpreting the word of divinity.
