# Neural Networks as universal function approximators

## How to intuitively understand what Neural Networks are trying to do

When you first learn about Neural Networks, you are bombarded with matrix multiplications, non-linearities, and backpropagation. There are many great resources where you can learn about this (very important) stuff. This is not one of them.

The question I want to answer is the following:

What are we solving when we are using Neural Networks?

Neural Networks are function approximators. But what is a function approximator?

We can model anything with an input and an output as a function. There are simple functions, and there are very, very complex functions. An example of a complex function would be: f(image) → numerical value [0–9]

This function *f* takes as input a 28 × 28 pixel image and returns the digit that this image represents. The images come from the MNIST dataset.

You can try to model this function by hand. You could say: if the pixel in this position and the next one are white, and the fourth pixel in the third row is black, this image represents the digit 9. This is incredibly complicated, but for decades computer vision scientists tried to solve these problems exactly like this.
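To see why this breaks down, here is a minimal sketch of what such a hand-written rule might look like. The pixel positions and thresholds are made up purely for illustration; a real digit classifier built this way would need thousands of rules like this, and each one would fail on slightly shifted or noisy images.

```python
import numpy as np

def classify_by_hand(image: np.ndarray) -> int:
    """A deliberately naive hand-written rule for a 28x28 grayscale
    image with values in [0, 1]. The positions and thresholds below
    are invented for illustration, not taken from any real system."""
    # Rule: if a pixel near the top right is bright and the centre
    # pixel is dark, guess "9". Otherwise fall back to a default.
    if image[3, 20] > 0.5 and image[14, 14] < 0.5:
        return 9
    return 0  # default guess when no rule matches

# A blank image matches no rule, so it falls through to the default.
blank = np.zeros((28, 28))
print(classify_by_hand(blank))  # prints 0
```

A single translation of the digit by a few pixels defeats this rule, which is exactly the brittleness that plagued hand-engineered approaches.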

The problem is so hard because your input space is so large. In this case we are talking about a 784-dimensional (28 × 28) input space. Just imagine how big this space gets if you have a full-HD image.
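The arithmetic makes the explosion concrete. A full-HD frame is 1920 × 1080 pixels, and with three colour channels the input dimensionality grows by roughly four orders of magnitude over MNIST:

```python
# Input-space dimensionality for a few image sizes.
mnist_dims = 28 * 28             # 784 dimensions
full_hd_dims = 1920 * 1080       # 2,073,600 dimensions (grayscale)
full_hd_rgb = full_hd_dims * 3   # 6,220,800 dimensions with RGB channels

print(mnist_dims, full_hd_dims, full_hd_rgb)
```

Hand-writing pixel rules in a six-million-dimensional space is hopeless, which is why a different tool was needed.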

## The ultimate function approximator

With the introduction of neural networks, we had a tool that could iteratively model a function better and better. You would have to define the input and the expected output and let the network find the best combination of matrix multiplications and non-linearities to model this function.
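The idea can be shown end to end on a toy problem. The sketch below, using plain NumPy, trains a tiny network (one hidden layer of matrix multiplications plus non-linearities, updated by backpropagation) to approximate the XOR function, which no single linear rule can model. The architecture and hyperparameters here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target function: XOR. Inputs are the four corner points,
# outputs are the values we want the network to approximate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer: tanh non-linearity, sigmoid output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(10000):
    # Forward pass: matrix multiplications plus non-linearities.
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))

    # Backward pass (backpropagation) for mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # Gradient descent: nudge the weights toward a better fit.
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

mse = np.mean((out - y) ** 2)
print("final MSE:", mse)
print("predictions:", out.ravel().round(2))
```

The point is not the code itself but the workflow: we defined inputs and expected outputs, and the network iteratively found the weights that approximate the function.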

This opens up a brand new door to programming in general, where you just have to define your input and expected output, and gather enough data to train such a function approximator. If you are interested in this concept, you should read Andrej Karpathy’s blog post on what he calls Software 2.0.

This is currently by far our best tool for solving problems in such high-dimensional input spaces, where the complexity of hand-engineered features simply explodes.

## Conclusion

I hope this article illustrated a new way of thinking about Neural Networks. I think the concept of Software 2.0 is extremely interesting and promising. It could fundamentally change the way we think about a huge range of problems.

Thank you for reading and keep up the learning!

If you want more and want to stay up to date, you can find me here: