Intuition Behind Deep Learning

In this blog, we will try to understand the basic intuition behind the concept of deep learning by seeing how the human brain functions and what neurons do.

Neuron

Source: biologyonline.com

Neurons are cells within the nervous system that transmit information to other nerve cells, muscle, or gland cells. Most neurons have a cell body, an axon, and dendrites.

The cell body contains the nucleus and cytoplasm. The axon extends from the cell body and often gives rise to many smaller branches before ending at nerve terminals. Dendrites extend from the neuron cell body and receive messages from other neurons. The dendrites are covered with synapses formed by the ends of axons from other neurons.

Neurons are connected to one another, but they do not actually touch each other. Instead, there are tiny gaps between them called synapses. These synapses can be chemical or electrical, and they pass the signal from one neuron to the next.

In order for neurons to communicate, they need to transmit information both within the neuron and from one neuron to the next.

There are about 86 billion neurons in the human brain, roughly half of all brain cells.

Our brain processes information using a network of neurons. Each neuron receives input, processes it, and outputs electrical signals to the neurons it is connected to.

You might be wondering why we need to study neurons in deep learning. The rest of this article explains that.

After studying the hierarchical arrangement of neurons in biological sensory systems, scientists modeled artificial neurons. These are represented as the nodes of an artificial neural network, and the nodes are organized into connected layers.

Source: medium.com/crash-course-in-deeplearning

Artificial neuron:

An artificial neuron is a mathematical function conceived as a model of a biological neuron; connected together, artificial neurons form a neural network.

It computes a weighted sum of its inputs, and this sum is passed through a nonlinear function, often called the activation function (such as the sigmoid), to produce the output.

When a neuron processes the input it receives, it decides whether its output should be passed on to the next layer as input. This decision is determined by the threshold of the activation function built into the system; a bias term shifts that threshold.

Every neuron has input connections and output connections. These connections simulate the behavior of the synapses in the brain. In the same way that synapses transfer the signal from one neuron to another, connections pass information between artificial neurons.
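To make this concrete, here is a minimal sketch (in plain Python, with made-up numbers that are not taken from the figures above) of an artificial neuron that computes a weighted sum of its inputs, adds a bias, and passes the result through a sigmoid activation:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def artificial_neuron(inputs, weights, bias):
    # Adder step: weighted sum of the inputs plus the bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation step: the nonlinear function produces the output
    return sigmoid(z)

# Example with arbitrary numbers: three inputs and their weights
output = artificial_neuron(inputs=[0.5, 1.0, 0.2],
                           weights=[0.4, -0.6, 0.9],
                           bias=0.1)
print(output)  # a value between 0 and 1
```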

To understand how a neuron functions with an example, refer to this blog:

Hebb’s Rule

Source: github.com/ashumeow/Computational-NeuroScience

For the biological neuron:

Hebb’s Rule states that, in the brain, learning is performed by changes in the synaptic connections between biological neurons.

Hebb’s rule states that when the axon of cell A is close enough to excite cell B and takes part in its activation in a repetitive and persistent way, some growth process or metabolic change takes place in one or both cells, so that the efficiency of cell A in activating cell B is increased.

In simple words, if the two neurons on either side of a synapse (connection) are activated simultaneously, then the strength of that synapse is selectively increased, making it easier for information to pass between those neurons.

On account of this, Donald Hebb also coined the famous phrase “neurons that fire together wire together.”

For the artificial neuron:

Source: hackster.io

Hebb’s rule says that when a synaptic weight contributes to firing (or activating) an artificial neuron, that weight is increased, and vice versa.

It forms the basis of a learning algorithm that updates the weights of the neural connections. Hebb’s principle can thus be described as a method of determining how to alter the weights between neurons based on their activation.

Hebb’s rule states that “the weight vector increases in proportion to the product of the input and the learning signal”:

weight(new) = weight(old) + input signal × learning signal

Here, the learning signal is simply the neuron’s output. Hebb’s rule can be used for pattern association, pattern categorization, pattern classification, and in a range of other areas.
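As a rough illustration (a sketch in plain Python, with made-up numbers), the update rule above can be written as follows, taking the learning signal to be the neuron’s output:

```python
def hebbian_update(weights, inputs, learning_signal):
    # Hebb's rule: weight(new) = weight(old) + input signal * learning signal
    return [w + x * learning_signal for w, x in zip(weights, inputs)]

# Example with arbitrary numbers: two weights, two inputs
weights = [0.2, 0.5]
inputs = [1.0, 0.0]      # only the first input is active
learning_signal = 1.0    # suppose the neuron fired, so its output is 1
weights = hebbian_update(weights, inputs, learning_signal)
print(weights)  # [1.2, 0.5]: only the weight on the active input grows
```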

MP Neuron (McCulloch-Pitts Neuron)

It is the first mathematical model of a neuron, proposed by Warren McCulloch (a neuroscientist) and Walter Pitts (a logician) in 1943. It takes a set of inputs and produces one output, and both the inputs and the output can only be 0 or 1. The MP neuron model is also known as the linear threshold gate model.

In the above figure,

x1, x2, …, xm are the inputs and w1, w2, …, wm are their respective weights

g is the Adder function, which performs a summation of the weighted inputs

f is the Activation function, which makes the decision

y is the final output

Source: hackernoon.com/mcculloch-pitts-neuron-deep-learning-building-blocks

These are the mathematical functions for g and y. Here b is the threshold value, which decides whether the output should be 0 or 1.
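Written out in the notation above (a reconstruction of the standard formulation, assuming the weighted-sum variant described in this post), the two functions are:

g(x1, x2, …, xm) = w1·x1 + w2·x2 + … + wm·xm

y = f(g) = 1 if g ≥ b, and 0 otherwise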

Let’s understand the MP neuron with an example:

This is an example of a movie-planning scenario where we find the output (whether to go for a movie or not). The weights w1 and w2, as well as the threshold value, are normally decided by trial and error. Here, for simplicity, we assume w1 and w2 are 1 and the threshold value is 2.
In the output, 1 is used for a true value/yes and 0 is used for a false value/no.

Steps to follow to find the output:

Step 1: Take the input values for features X1 and X2 for different conditions, where each row represents one condition.

Step 2: Find the weights w1 and w2 by trial and error.

Step 3: Find the Adder value (weighted sum) for each row.

Step 4: Find the threshold value by brute force.

Step 5: Using the activation function, check whether the Adder value has reached the threshold or not.

Step 6: If it reaches the threshold, the output is 1 (go for a movie); otherwise it is 0 (do not go for a movie). A small code sketch of this procedure follows below.
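Here is a minimal Python sketch of these steps (the condition rows below are illustrative assumptions; the post only fixes w1 = w2 = 1 and a threshold of 2):

```python
def mp_neuron(inputs, weights, threshold):
    # Adder: weighted sum of the binary inputs
    g = sum(x * w for x, w in zip(inputs, weights))
    # Activation: output 1 if the sum reaches the threshold, else 0
    return 1 if g >= threshold else 0

weights = [1, 1]   # w1 and w2, assumed to be 1 for simplicity
threshold = 2      # threshold value b

# Each row is one condition: binary values for features X1 and X2
conditions = [(1, 1), (1, 0), (0, 1), (0, 0)]

for x1, x2 in conditions:
    y = mp_neuron([x1, x2], weights, threshold)
    decision = "Go for a movie" if y == 1 else "Do not go for a movie"
    print(f"X1={x1}, X2={x2} -> y={y} ({decision})")
```

With these particular weights and threshold, the neuron fires only when both features are 1, i.e. it behaves like a logical AND gate.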

I hope this blog helped you get a better understanding of the neuron and how it communicates in the brain, in both biological and mathematical terms, along with a short introduction to the MP Neuron.

We will study the Adder function, the threshold value, and the activation function in the next blog.

Thanks for your time!
