Introduction to Neural Networks
Neural network basics
What are neural networks?
Neural networks, commonly known as Artificial Neural Networks (ANNs), are machine learning (ML) models loosely inspired by the way the human brain processes information. ANNs are not a solution for every problem that arises, but combined with other techniques they produce strong results on many ML tasks. They are most commonly used for clustering and classification; they can be used for regression as well, although other methods are often better suited there.
Building blocks and functionality of ANNs
The building unit of a neural network is the neuron, which imitates the functionality of a biological neuron. A typical neural network uses the sigmoid activation function, demonstrated below. This function is popular largely because its derivative can be written in terms of f(x) itself, which comes in handy when minimizing the error.
z = ∑ wᵢxᵢ
y = sigmoid(z) = 1 / (1 + e^(−z))
wᵢ = weights
xᵢ = inputs
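The neuron described above can be sketched in a few lines of JavaScript. This is a minimal illustration (not the post's actual code); it shows the weighted sum, the sigmoid, and the handy property that the derivative can be computed from the output y alone as y(1 − y).

```javascript
// Sigmoid activation: squashes any real z into (0, 1).
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}

// Derivative of sigmoid expressed in terms of the output y = sigmoid(z),
// the property the text mentions: sigmoid'(z) = y * (1 - y).
function sigmoidDerivative(y) {
  return y * (1 - y);
}

// A single neuron: weighted sum of inputs passed through sigmoid.
function neuron(weights, inputs) {
  const z = weights.reduce((sum, w, i) => sum + w * inputs[i], 0);
  return sigmoid(z);
}

console.log(neuron([0.5, -0.5], [1, 1])); // z = 0, so this prints 0.5
```

Note that the derivative peaks at y = 0.5 and shrinks toward 0 as the output saturates near 0 or 1, which is why learning slows down for saturated neurons.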
Neurons are connected in layers so that one layer can communicate with the next, forming a neural network. The inner layers between the input and output layers are called hidden layers. The outputs of one layer are fed as inputs to the next layer.
Learning, for an ANN, is the task of adjusting the weights to minimize the error. This is performed by back propagation of the error. For a simple neuron with the sigmoid as the activation function, the error can be expressed as below. Let's consider the general case where the weights form a vector W and the inputs a vector X.
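The equation the paragraph refers to appears to have been lost in extraction. A standard formulation for a single sigmoid neuron, with target output t and learning rate η (these symbols are my notation, not necessarily the post's), is:

```latex
% Squared error of one neuron with output y for target t:
E = \tfrac{1}{2}(t - y)^2, \qquad y = \sigma(z), \qquad z = \sum_i w_i x_i

% Gradient via the chain rule, using \sigma'(z) = y(1 - y):
\frac{\partial E}{\partial w_i} = -(t - y)\, y\,(1 - y)\, x_i

% Gradient-descent weight update with learning rate \eta:
\Delta w_i = \eta\, (t - y)\, y\,(1 - y)\, x_i
```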
From the above equation we can generalize the weight adjustment, and you may notice that it only requires information from the adjacent neuron layers. This makes it a robust learning mechanism, known as the back propagation algorithm: starting from the output node, the error is propagated backwards, updating the weights of the previous neurons.
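As an assumed illustration (again, not the post's actual code), one gradient-descent step for a single sigmoid neuron looks like this; notice the update for each weight uses only the local input, the neuron's own output, and the error signal:

```javascript
function sigmoid(z) { return 1 / (1 + Math.exp(-z)); }

// One training step: forward pass, then weight update.
// delta combines the output error with the sigmoid derivative y * (1 - y).
function trainStep(weights, inputs, target, learningRate) {
  const z = weights.reduce((s, w, i) => s + w * inputs[i], 0);
  const y = sigmoid(z);
  const delta = (target - y) * y * (1 - y);
  return weights.map((w, i) => w + learningRate * delta * inputs[i]);
}

// Repeated steps drive the output toward the target.
let w = [0.1, -0.2];
for (let epoch = 0; epoch < 1000; epoch++) {
  w = trainStep(w, [1, 0.5], 1, 0.5);
}
```

In a multi-layer network the same local rule applies, except that a hidden neuron's error signal is the weighted sum of the deltas from the layer after it.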
Let us write a simple application that trains on two images and then applies the learned filter to a given image. The following are the source and the target images for the training process.
I have used an ANN with back propagation to adjust the errors. The intention of the training is to find a function f(red, green, blue, alpha) that matches the target color transformation. The target image was made by applying several color adjustments to the source image. Let's see the code.
$ npm install
$ npm start
The source image should be named input_image_train.jpg and the target image output_image_train.jpg. The image to apply the filter to should be test.jpg; the result will be saved as out.jpg. Following are some example images I have filtered using the trained model.
Cool, right? Training takes a few seconds, but filtering is instantaneous. You can save the trained model for future use, which would be the smart approach in a real-world application.