What are Neural Networks?

Dev Shah
students x students
6 min read · Nov 10, 2021


Imagine this. You’re in a foreign country and all the signs around you are in a language you don’t understand. The most logical way out is to pull out your phone and Google Translate your surroundings. Google translates the signs into a language you understand and you’re good to go. What makes this possible is a neural network.

What are Neural Networks? 🧠

The perceptron, created by Frank Rosenblatt, is widely considered the very first neural network

Neural networks are a subset of machine learning and are quite literally used to help machines learn, but let’s understand what neural networks are before jumping into the ML intersection. Neural networks are also known as artificial neural networks, and the name was actually inspired by the way the human brain functions 🤯. Specifically, neural networks work similarly to how neurons send signals to one another: a neuron receives input signals and combines them into a single output. This is referred to as a biological neural network. Artificial neural networks function in much the same way, with an input and an output, but it isn’t quite as simple as that. Neural networks are able to recognize patterns within data; the data is numerical and is stored as vectors. This allows data we see in the real world to be converted into something the neural network can understand. The structure of the neural network is what allows this to work.

How Do Neural Networks Work?

Neural networks are made up of layers, specifically node layers. These layers break down into an input layer, an output layer and one or more hidden layers. The image below is an example of a neural network in which all the nodes are connected to one another. Each individual node has its own weights and threshold. A weight is a numerical representation of the connection between two nodes; the higher the weight, the greater the impact it has on the next node. The threshold is key in allowing the most accurate data to be outputted from the neural network: if the output of a certain node is greater than or equal to its threshold, the information is passed on to the next node. However, if the threshold is not met, the information won’t be passed on.
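To make that concrete, here’s a minimal sketch of a single node in plain Python; the input values, weights and threshold are made-up numbers, purely for illustration.

```python
# A single node: it "fires" (passes information on) only when the
# weighted sum of its inputs reaches the threshold.

inputs = [0.8, 0.2, 0.5]     # signals coming in from the previous layer
weights = [0.9, 0.1, 0.4]    # how strongly each connection matters
threshold = 0.6              # minimum value needed to pass data forward

weighted_sum = sum(x * w for x, w in zip(inputs, weights))

if weighted_sum >= threshold:
    print("Node fires, output passed to the next layer:", weighted_sum)
else:
    print("Threshold not met, nothing is passed on")
```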

So wait… is that all? Is it really that simple?

It’s actually quite the opposite: a bunch of computations occur within each layer and node before the neural network reaches a final output. When data is run through the neural network, it’s initially fed through the input layer. The channels connecting each layer carry numerical values known as weights. Once the input layer is set, weights are assigned to the channels according to their importance to the variable being determined (higher weights play a more significant role in the output).

Here’s where the math comes into play: the input values that reach each neuron are multiplied by the corresponding weights. The sum is then sent as an input to the next layer, and a bias is added onto this input. The bias is a constant that helps the model fit the data better. So the math equation looks a little like this:
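For a neuron with inputs x₁, x₂, …, xₙ, weights w₁, w₂, …, wₙ and bias b, the value passed forward is the standard weighted sum:

z = (w₁ × x₁) + (w₂ × x₂) + … + (wₙ × xₙ) + b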

Following this, the value that’s determined is run through a threshold function, more commonly known as the activation function. The activation function decides whether the neuron gets activated or not. Activated neurons continue to transmit data to the next layer; this is better known as forward propagation. This continues until the output layer is reached, at which point the neural network outputs the final result.
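To see forward propagation end to end, here’s a minimal NumPy sketch of a tiny network with one hidden layer; the layer sizes, random weights and the sigmoid activation are assumptions chosen for illustration, not a fixed recipe.

```python
import numpy as np

def sigmoid(z):
    # A common activation function: squashes any value into the range (0, 1)
    return 1 / (1 + np.exp(-z))

# Made-up example: 3 input values, one hidden layer of 4 neurons, 1 output neuron
x = np.array([0.5, 0.1, 0.9])

W1 = np.random.randn(4, 3) * 0.1   # weights: input layer -> hidden layer
b1 = np.zeros(4)                   # biases for the hidden layer
W2 = np.random.randn(1, 4) * 0.1   # weights: hidden layer -> output layer
b2 = np.zeros(1)                   # bias for the output neuron

# Forward propagation: weighted sum + bias, then the activation, layer by layer
hidden = sigmoid(W1 @ x + b1)
output = sigmoid(W2 @ hidden + b2)

print("Network output:", output)
```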

What if the neural network gets the output wrong? 🤔

Neural networks aren’t perfect; they require a lot of training, and during the initial stages of training they will get a lot of predictions wrong. To learn from these mistakes, the neural network is actually fed the correct output value, which is compared with what the network outputted. The magnitude of the error is an indication of how far off the neural network was from the correct output, similar to the image below 👇

It’ll look something like this

Following this step, this information is sent backwards through the network; this is known as backpropagation. Based on the information relayed back through the network, the weights are adjusted accordingly. This entire process of going back and forth through the neural network is performed many times to train the network.
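As a rough sketch of that training loop, here’s a single neuron being nudged toward the correct output with gradient descent; the training example, learning rate and number of passes are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Made-up training example: 2 inputs and the correct output the network is fed
x = np.array([0.4, 0.7])
y_true = 1.0

weights = np.array([0.1, -0.2])
bias = 0.0
learning_rate = 0.5

for step in range(100):
    # Forward pass: weighted sum + bias, then the activation
    y_pred = sigmoid(weights @ x + bias)

    # How far off the prediction is from the correct output
    error = y_pred - y_true

    # Backward pass: gradient of the squared error with respect to the
    # weighted sum, then adjust the weights and bias to reduce the error
    grad = error * y_pred * (1 - y_pred)
    weights -= learning_rate * grad * x
    bias -= learning_rate * grad

print("Prediction after training:", sigmoid(weights @ x + bias))
```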

Types of Neural Networks

Neural networks can be further categorized into different classes, and each class has its own purpose, principles, strengths and applications. There are many types of neural networks, but the three most common ones are Feedforward Neural Networks, Recurrent Neural Networks and Convolutional Neural Networks.

Feedforward Neural Networks

These are the most common type of neural network; they consist of an input layer, one or more hidden layers and an output layer. There’s no backward connection between the nodes/neurons; it’s just like it sounds, the network only feeds forward. Data moves in one direction, from the input layer to the output layer, with no feedback loop between the nodes. Feedforward networks are often referred to as Multi-Layer Perceptrons (MLPs). Their neurons use non-linear activation functions, which lets them work with data that is non-linear. Some applications of these neural networks include face recognition and computer vision.
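For a concrete picture, here’s a minimal sketch of a feedforward network (MLP) in Keras, assuming TensorFlow is installed; the layer sizes and activations are arbitrary choices for illustration.

```python
import tensorflow as tf

# Data flows strictly from the input layer, through two hidden layers, to the output
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),             # 10 input features (made-up size)
    tf.keras.layers.Dense(16, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(8, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid")  # output layer (e.g. a yes/no prediction)
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```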

Recurrent Neural Networks — Long Short Term Memory

Recurrent neural networks are essentially the opposite of a feedforward neural network; the factor that differentiates RNNs is the feedback loop within the network. Essentially, the output at one step is fed back into the network at the next step, which helps the network use what it has already seen to predict the final output. Within the RNN, the nodes keep track of the information they had in the previous step; if the prediction is incorrect, this value is adjusted to increase the accuracy of the network. RNNs are primarily used for speech recognition and natural language processing (NLP).
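Here’s a minimal sketch of a recurrent network with an LSTM layer in Keras, again assuming TensorFlow is installed; the vocabulary size, sequence length and layer sizes are made-up values for illustration.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,), dtype="int32"),         # sequences of 20 word IDs (made-up length)
    tf.keras.layers.Embedding(input_dim=5000, output_dim=32),  # turn word IDs into vectors
    tf.keras.layers.LSTM(64),                                  # the recurrent part: remembers previous steps
    tf.keras.layers.Dense(1, activation="sigmoid")             # e.g. positive/negative sentiment
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```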

Convolutional Neural Networks

Convolutional Neural Networks are very similar to feedforward neural networks. CNNs are a version of multilayer perceptrons in which the layers are connected to each other via channels. This interconnected, layered structure is what allows a CNN to classify images. There are four main types of layers in a CNN: the Convolutional Layer, the Activation Layer, the Pooling Layer and the Fully Connected Layer, which together generate the final output. If you want to read more about CNNs, go here. CNNs are primarily used for signal processing and image classification.
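Here’s a minimal Keras sketch that stacks those four layer types, assuming TensorFlow is installed; the image size and number of classes are made-up values for illustration.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),        # e.g. 28x28 grayscale images
    tf.keras.layers.Conv2D(16, kernel_size=3),       # convolutional layer
    tf.keras.layers.Activation("relu"),              # activation layer
    tf.keras.layers.MaxPooling2D(pool_size=2),       # pooling layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax")  # fully connected layer (10 classes)
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```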

All in all, neural networks are constantly being trained and are becoming more and more accurate. They’re almost everywhere, and they’re growing in popularity in modern-day society, whether it be data analysis or image recognition. They will continue to play a vital role in the AI and ML field!

If you enjoyed my article, make sure to clap for it and share it. Feel free to connect with me on LinkedIn 😁

