Introduction to Neural Networks

@IanChriste
Nov 1 · 3 min read

I. What are Neural Networks?

Neural networks are very much like their name suggests: networks of neurons used to process information. In data science, these artificial neural networks process input data to produce accurate results.

It is the ability of neural networks to learn complex mappings of input data to useful attribute representations that has made these models so accurate.


In neural networks there are input layers, hidden layers, and output layers. The input layer is the layer that receives the data and feeds it into the network. The hidden layers (between input and output) receive data from the nodes of the previous layer. Finally, the output layer brings together all of the data to produce the result.
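As a rough sketch of how data flows through these layers, here is a minimal forward pass in plain Python. The sigmoid activation and the toy weights are illustrative assumptions, not values from the article:

```python
import math

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each neuron takes a weighted sum of the previous layer's
    # outputs, adds a bias, and applies the activation function
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# toy network: 2 inputs -> 2 hidden neurons -> 1 output neuron
x = [0.5, -1.0]                                        # input layer
hidden = layer(x, [[0.1, 0.4], [-0.2, 0.3]], [0.0, 0.1])  # hidden layer
output = layer(hidden, [[0.7, -0.5]], [0.2])              # output layer
```

Each call to `layer` is one step of the data moving forward through the network.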

II. Training a Neural Network:

The weight update rule requires an estimate of the error for each neuron. This is not an issue for the output layer, but it is difficult to calculate the error for the inner hidden layers.

Back-propagation solves this: it calculates the error for each neuron, and the weight update rule then modifies the weights in the network.

Back-propagation is the process of propagating the total loss back into the neural network to see how much of the error each node is responsible for. With that information, the weights are updated to reduce the error: nodes responsible for more of the error receive larger weight updates.


In back-propagation, the weight updates reduce the error but do not eliminate it completely, which helps avoid overfitting the training data. (Back-propagation is a supervised learning technique.)
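The weight update rule itself can be sketched as a single gradient-descent step. This is a minimal illustration; the learning rate of 0.1 and the example values are arbitrary choices, not from the article:

```python
def update_weight(w, grad, lr=0.1):
    # gradient descent: move the weight a small step in the
    # direction that reduces the error
    return w - lr * grad

# a weight of 0.8 with an error gradient of 0.5 shrinks slightly
w = update_weight(0.8, grad=0.5)  # 0.8 - 0.1 * 0.5 = 0.75
```

Back-propagation's job is to supply the `grad` value for every weight, including those deep in the hidden layers.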

III. Steps in Back-propagation:

Step 1: Calculate the error for neurons at the output layer.

Step 2: Share the error with the preceding hidden-layer neurons, calculating the overall error each neuron is responsible for, and update the weights accordingly.

Step 3: Repeat until the weights in every layer have been updated.
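The three steps above can be sketched on a toy network with one hidden neuron and one output neuron. This is a minimal illustration assuming sigmoid activations and a squared-error loss; the initial weights and learning rate are arbitrary:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(w1, w2, x, target, lr=0.5):
    # forward pass: input -> hidden -> output
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # Step 1: error term at the output layer
    delta_out = (y - target) * y * (1 - y)
    # Step 2: share the error back to the hidden neuron
    delta_hid = delta_out * w2 * h * (1 - h)
    # Step 3: update the weights in every layer
    w2 -= lr * delta_out * h
    w1 -= lr * delta_hid * x
    return w1, w2

def loss(w1, w2, x, target):
    # squared error between the network's output and the target
    return 0.5 * (sigmoid(w2 * sigmoid(w1 * x)) - target) ** 2

w1, w2 = 0.5, -0.3
x, target = 1.0, 1.0
before = loss(w1, w2, x, target)
for _ in range(100):
    w1, w2 = train_step(w1, w2, x, target)
after = loss(w1, w2, x, target)
```

After repeating the steps, the loss is smaller than when training started, which is the whole point of the procedure.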


IV. Neural Networks Types: RNN and CNN

Recurrent neural networks (RNNs) can be thought of as multiple copies of the same network, where each one passes a message along to the next. RNNs introduce the idea of loops in the network and are very good at processing sequential data such as language.
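The "copies passing messages" idea can be sketched as a loop that reuses the same weights at every time step. This is a toy illustration: the tanh activation and the weight values are assumptions, not from the article:

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    # the SAME weights w_h and w_x are reused at every time step;
    # these are the "copies" of the network
    return math.tanh(w_h * h + w_x * x)

sequence = [0.2, -0.1, 0.4]
h = 0.0  # initial hidden state
for x in sequence:
    h = rnn_step(h, x)  # the hidden state is the message passed along
```

Because the hidden state carries information forward, the order of the inputs matters, which is exactly what makes RNNs suited to sequences.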

Convolutional neural networks (CNNs) are designed mainly for image classification: groups of neurons that share weights learn to identify features. A CNN takes an input image, processes it, and classifies it under a certain category. In a CNN the neurons work as a group, with each one examining a different location in the image, so that together they cover the entire image.
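The weight-sharing idea can be illustrated with a one-dimensional convolution, a simplification of the 2-D convolutions used on images. The kernel values here are illustrative assumptions:

```python
def conv1d(signal, kernel):
    # the same kernel (the shared weights) slides over every
    # position in the signal, like neurons each examining a
    # different location with identical weights
    k = len(kernel)
    return [sum(kernel[i] * signal[p + i] for i in range(k))
            for p in range(len(signal) - k + 1)]

# a simple edge-detecting kernel: it outputs the difference
# between neighboring values, so it fires where the signal changes
edges = conv1d([0, 0, 1, 1, 0], [-1, 1])  # -> [0, 1, 0, -1]
```

The same small set of weights detects the feature (here, an edge) wherever it appears, which is why CNNs need far fewer parameters than fully connected networks.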

V. Deep Learning Past and Present:

In the past there were three main issues holding back Deep Learning.

(1) Back-propagation: error was difficult to attribute, because the error would get shared out through so many layers that the estimates were no longer useful by the time they reached the input layers.

(2) Deep learning works best with a great deal of training data.

(3) Deep learning requires a great deal of computing power.

With these issues now largely out of the way, deep learning has been able to thrive and perform deep and accurate analysis.
