Deep Learning: Relationship between Artificial Intelligence, Machine Learning & Deep Learning

Vishnu Vijayan PV
3 min read · Aug 3, 2020

Within the field of machine learning, there is an area often referred to as brain-inspired computation. The human brain is one of the best ‘machines’ we know of for learning and solving problems, and brain-inspired techniques take their cue from how it works. The brain’s main computational element is believed to be the neuron. A complex, connected network of neurons forms the basis of every decision made from the information the brain gathers. This is exactly the idea behind the Artificial Neural Network technique.

Deep Learning sits within the area of Neural Networks.

Within the domain of neural networks, there is an area called Deep Learning (DL), in which neural networks have more than three layers, i.e. more than one hidden layer. The neural networks used in deep learning are called Deep Neural Networks (DNNs). DL algorithms mimic the way the nervous system is structured, where each neuron is connected to the others and passes information along. DL models work in layers; a typical model has at least three layers, and each layer accepts information from the previous one and passes it on to the next.

Deep learning models tend to keep improving as the amount of data grows, whereas older machine learning models stop improving after a saturation point.

Relationship between Artificial Intelligence, Machine Learning & Deep Learning

In mathematical terms, all machine learning is AI, but not all AI is machine learning. Similarly, all deep learning is machine learning but not all machine learning is deep learning.

Artificial Intelligence is human intelligence exhibited by machines
Machine Learning is an approach to achieve Artificial Intelligence
Deep Learning is a technique for implementing Machine Learning

Activation Function

Activation functions are functions that decide, given the inputs into a node, what the node’s output should be. Because it’s the activation function that decides the actual output, we often refer to the outputs of a layer as its “activations”.

One of the simplest activation functions is the Heaviside step function. This function returns a 0 if the linear combination is less than 0, and a 1 if the linear combination is greater than or equal to zero.

The output unit returns the result of f(h), where h is the input to the output unit.
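As a minimal sketch of the idea above, the node below computes h as the weighted sum of its inputs (plus a bias) and passes it through the Heaviside step function. The input and weight values are made-up illustrative numbers, not anything prescribed by a particular network:

```python
def heaviside(h):
    """Heaviside step activation: 1 if h >= 0, else 0."""
    return 1 if h >= 0 else 0

def node_output(inputs, weights, bias):
    # h is the linear combination of the inputs and weights, plus a bias term
    h = sum(x * w for x, w in zip(inputs, weights)) + bias
    # the output unit returns f(h), where f is the activation function
    return heaviside(h)

# Illustrative values only: two inputs with two associated weights
print(node_output([0.7, 0.3], [0.4, -0.2], -0.1))  # h = 0.12, so the output is 1
```

Swapping `heaviside` for another function (such as a sigmoid) changes only the activation step; the weighted sum stays the same.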

Weights

When input data comes into a neuron, it gets multiplied by a weight value assigned to that particular input. For example, suppose a university accepts students based on their test scores and grades. The neuron then has two inputs, one for test scores and one for grades, and therefore two associated weights that can be adjusted individually.

Use of Weights

These weights start out as random values, and as the neural network learns more about what kind of input data leads to a student being accepted into a university, the network adjusts the weights based on any errors in categorization that the previous weights resulted in. This is called training the neural network.

Remember that we can think of a weight as the slope m in the familiar linear equation y = mx + b.
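The training loop described above can be sketched with the classic perceptron update rule: start from random weights, and whenever a categorization error occurs, nudge each weight in the direction that reduces that error. The toy admissions data (test score, grades, both rescaled to 0–1) is invented for illustration:

```python
import random

def train_perceptron(data, labels, epochs=10, lr=0.1):
    """Adjust weights based on categorization errors (perceptron rule)."""
    random.seed(0)
    # weights start out as random values
    weights = [random.uniform(-0.5, 0.5) for _ in range(len(data[0]))]
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(data, labels):
            h = sum(xi * wi for xi, wi in zip(x, weights)) + bias
            prediction = 1 if h >= 0 else 0
            error = target - prediction  # 0 when the categorization is correct
            # adjust each weight based on the error the previous weights produced
            weights = [wi + lr * error * xi for xi, wi in zip(x, weights)]
            bias += lr * error
    return weights, bias

# Hypothetical admissions data: (test score, grades); label 1 = accepted
data = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.3), (0.3, 0.1)]
labels = [1, 1, 0, 0]
weights, bias = train_perceptron(data, labels)
```

After training, the learned weights separate the accepted applicants from the rejected ones on this toy data; real networks apply the same error-driven adjustment across many layers via backpropagation.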
