Deep Learning for Dummies(1)
You hear artificial intelligence, deep learning, and machine learning everywhere, so you google these terms and find yourself lost in the vast number of articles presented to you. Ignoring the rising anxiety, you scour the articles, finding yourself even more confused by all the technical jargon than when you first started.
Applications, examples, math formulas, and code snippets aside, here is a straightforward article with exactly what you need to know to get started on running TensorFlow code samples.
Artificial intelligence (AI) is the big set. Machine learning (ML) is a subset of AI. Deep learning and shallow learning are subsets of ML.
One of the reasons AI originated was to automate the function of neurons in the human body. This way, computers and machines can imitate nature’s creation, the human brain, and perform tasks as quickly and as accurately as the human brain does. This is now done using what are called artificial neurons.
Deep learning is the new standard for these learning algorithms. Some applications include self-driving cars and face and image recognition. The main premise of deep learning is the neural network (a computer system modeled on the human brain and nervous system).
The building blocks of neural networks are artificial neurons. Artificial neurons are arranged in a network of neurons (a neural net). A row of neurons is called a layer.
A perceptron is a single artificial neuron model. Models with many layers include neural networks such as the multilayer perceptron (MLP), convolutional neural networks (CNN), and recurrent neural networks (RNN).
Neural networks consist of input, output, and hidden layers.
- The input layer receives the input and is the first layer of the network.
- The output layer is the final layer of the network.
- Hidden layers perform computations on the incoming data and pass the output they generate on to the next layer.
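As a concrete illustration, here is a minimal sketch of data flowing through these layers. It uses plain NumPy, and the layer sizes and random weights are arbitrary assumptions chosen just to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 hidden neurons -> 1 output.
x = rng.normal(size=3)            # input layer: the raw data
W1 = rng.normal(size=(4, 3))      # weights into the hidden layer
W2 = rng.normal(size=(1, 4))      # weights into the output layer

hidden = np.tanh(W1 @ x)          # hidden layer transforms the input
output = W2 @ hidden              # output layer produces the final result

print(output.shape)  # (1,)
```

Each `@` is a weighted sum over the previous layer, which is exactly the role of the weights described next.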
Artificial neurons are units that take weighted input signals and produce an output signal using an activation function. An activation function is a simple mapping of the summed weighted input to the output of the neuron, such as the tanh, sigmoid, ReLU, and softmax functions. Neuron weights are weights on the inputs, very much like the coefficients used in a regression equation.
Therefore, an artificial neuron,
(1) takes some input data
(2) transforms this input data by calculating a weighted sum of the inputs
(3) applies an activation function to this weighted sum to produce its output
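The three steps above can be sketched as a single artificial neuron in plain Python. The inputs, weights, and bias here are made-up values for illustration, and sigmoid is just one possible choice of activation function:

```python
import math

def neuron(inputs, weights, bias):
    # (1) take some input data,
    # (2) compute the weighted sum of the inputs,
    # (3) apply an activation function (sigmoid here) to that sum.
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

out = neuron(inputs=[0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.1], bias=0.0)
print(out)  # a value between 0 and 1
```

A perceptron is essentially this function; an MLP chains many of them together in layers.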
Still, what is the point of all of this?
To fully achieve the functionality of a neuron, our machine needs to learn by itself to predict future events. Learning is all about automating a task, or, in the words of Arthur Samuel, to
“give computers the ability to learn without being explicitly programmed.”
We can achieve this by using a lot of data to train a classifier, which in turn makes predictions on new data by itself.
The process for this is –
(1) take some data, Training Data (dataset supplied for training purposes).
(2) train a model on that data
Training a model is a learning process where the model is exposed to new, unfamiliar data step by step. At each step, the model makes predictions and gets feedback about how accurate its generated predictions were. This feedback, which is provided in terms of an error according to some measure, is used to correct the errors made in prediction.
This is done by –
1) Forward propagation — the input layer supplies the input to the hidden layers, and then the output is generated.
2) Backpropagation — the error flows back from the output layer through the hidden layers, and the weights are updated so that the error in subsequent iterations is reduced.
(3) use the trained model to make predictions on new data.
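Here is a minimal sketch of this train-then-predict process, shrunk down to a single linear neuron learning y = 2x from made-up toy data. The learning rate and gradient rule are hand-picked assumptions; real frameworks such as TensorFlow automate the backpropagation step for full networks:

```python
# Toy training data: examples of y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the single weight our "model" will learn
lr = 0.05  # learning rate: how big each correction step is

for epoch in range(100):
    for x, y in data:
        pred = w * x         # forward propagation: make a prediction
        error = pred - y     # feedback: how wrong was the prediction?
        w -= lr * error * x  # backpropagation: update the weight to reduce error

print(round(w, 3))  # close to 2.0

# (3) use the trained model to make predictions on new data.
print(w * 5.0)      # close to 10.0
```

The repeated predict-then-correct loop is exactly the step-by-step exposure described above, just with one weight instead of millions.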
Three very useful resources: