Types of neural networks

Çağatay Tüylü
Sep 12, 2022

There are several varieties of neural networks, each of which serves a particular function. While this is not an exhaustive list, the following are among the most common types of neural networks encountered in popular use cases:

Frank Rosenblatt invented the perceptron in 1958, making it the earliest neural network. It is the most basic type of neural network, consisting of a single neuron.
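As a minimal sketch of this idea (the weights below are hand-picked for illustration, not taken from the article), a single perceptron applies a step function to a weighted sum of its inputs; with these particular weights it computes logical AND:

```python
import numpy as np

def perceptron(x, w, b):
    """A single neuron: weighted sum of inputs followed by a step activation."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hand-picked weights that make this perceptron compute logical AND
w = np.array([1.0, 1.0])
b = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
```

Only the input (1, 1) pushes the weighted sum above the threshold, so only that case outputs 1.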

This article focuses mostly on feedforward neural networks, also known as multi-layer perceptrons (MLPs). They are made up of three kinds of layers: an input layer, one or more hidden layers, and an output layer. Although these networks are called MLPs, it is worth highlighting that they are built from sigmoid neurons rather than perceptrons, because most real-world problems are nonlinear. These models are trained on data, and they serve as the foundation for computer vision, natural language processing, and other neural networks.

Convolutional neural networks (CNNs) are similar to feedforward networks and are used to recognize images, detect patterns, and perform computer vision tasks. These networks apply linear algebra concepts, namely matrix multiplication, to find patterns within an image.

Every machine learning algorithm learns to map an input to an output. In the case of parametric models, the algorithm learns a function defined by a set of weights:

Input -> f(w1, w2, …, wn) -> Output

In classification problems, the algorithm learns the function that separates two classes, which is referred to as a decision boundary. A decision boundary helps us identify whether a particular data point belongs to the positive or negative class.
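As an illustrative sketch (the parameters are hand-chosen, not learned), a linear decision boundary w·x + b = 0 splits the plane in two; the sign of w·x + b tells us on which side a point falls:

```python
import numpy as np

# Hand-chosen parameters of the boundary line x1 + x2 - 1 = 0 (illustration only)
w = np.array([1.0, 1.0])
b = -1.0

def classify(point):
    """Positive class if the point lies on the positive side of the boundary."""
    return "positive" if np.dot(w, point) + b > 0 else "negative"

print(classify(np.array([2.0, 2.0])))  # lies above the line
print(classify(np.array([0.0, 0.0])))  # lies below the line
```

A learned model would arrive at w and b by fitting training data; the decision rule itself stays the same.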

Artificial Neural Network (ANN)

A single perceptron (or neuron) may be thought of as a logistic regression unit. An artificial neural network (ANN) is made up of layers, each containing many perceptrons/neurons. Because inputs are processed only in the forward direction, an ANN is also known as a feed-forward neural network.


An ANN consists of three kinds of layers: input, hidden, and output. The input layer receives the inputs, the hidden layers process them, and the output layer produces the result. Essentially, each layer attempts to learn certain weights.

ANN can be used to solve problems related to:

  • Tabular data
  • Image data
  • Text data

An artificial neural network can learn any nonlinear function. For this reason, these networks are commonly referred to as universal function approximators. ANNs are capable of learning weights that map any input to any output.

Activation functions are one of the primary reasons for this universal approximation capability. They introduce nonlinearity into the network, which helps it learn any complicated input-output relationship.
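To make the role of nonlinearity concrete, here is a small sketch (with hand-picked weights, purely for illustration) of a two-layer network with ReLU activations that computes XOR, a function no single linear layer can represent:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

# Hand-picked weights (illustrative): a hidden layer of 2 ReLU units, linear output
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])
b2 = 0.0

def mlp(x):
    h = relu(W1 @ x + b1)   # hidden layer: nonlinear activation
    return W2 @ h + b2      # output layer: linear combination

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", mlp(np.array(x, dtype=float)))
```

Without the ReLU, the two layers would collapse into a single linear map, which cannot separate XOR's classes; the nonlinearity is what makes the difference.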

Recurrent Neural Network (RNN)

An RNN has a recurrent connection through its hidden state. This looping constraint ensures that sequential information in the input data is captured.

We can use recurrent neural networks to solve problems related to:

  • Time Series data
  • Text data
  • Audio data

When making predictions, an RNN captures the sequential information present in the input data, i.e. the dependence between the words in a text.

The parameters of an RNN are shared across time steps, which is commonly referred to as parameter sharing. As a result, there are fewer parameters to train, lowering the computational cost.
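A minimal sketch of this idea (the shapes and random values are chosen arbitrarily for illustration): the same weight matrices are reused at every time step, while the hidden state carries information forward through the sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

# One set of parameters, shared across all time steps (illustrative sizes)
W_x = rng.normal(size=(4, 3))  # input -> hidden
W_h = rng.normal(size=(4, 4))  # hidden -> hidden (the recurrent connection)
b = np.zeros(4)

def rnn_forward(inputs):
    """Run a simple tanh RNN over a sequence, reusing the same weights each step."""
    h = np.zeros(4)
    for x in inputs:               # same W_x and W_h at every step: parameter sharing
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h

sequence = [rng.normal(size=3) for _ in range(5)]
final_state = rnn_forward(sequence)
print(final_state.shape)  # hidden state summarizing the whole sequence
```

No matter how long the sequence is, the parameter count stays fixed at the sizes of W_x, W_h, and b.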

Convolutional Neural Network (CNN)

Convolutional neural networks (CNNs) are now popular in the deep learning community. These models are employed in a variety of applications and domains, but they are especially common in image and video processing projects.

CNNs are built from filters, also known as kernels. Using the convolution operation, kernels extract relevant information from the input. Let's try to understand the significance of filters using images as input data. Convolving an image with a filter produces a feature map.

Although convolutional neural networks were developed to handle problems with image data, they can also perform well on sequential inputs.

A CNN learns its filters on its own, without being explicitly told what to detect. These filters help extract relevant features from the input data.
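A minimal sketch of the convolution operation itself (the kernel below is a standard vertical-edge detector, hand-chosen for illustration; in a real CNN the kernel values would be learned):

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution (no padding): slide the kernel and sum elementwise products."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 5x5 image with a dark left half and a bright right half
image = np.array([[0, 0, 0, 1, 1]] * 5, dtype=float)
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # Prewitt-style vertical edge detector

feature_map = convolve2d(image, kernel)
print(feature_map)  # strong responses where the kernel straddles the edge
```

The resulting feature map is near zero over flat regions and large where the brightness changes, which is exactly the "relevant information" a learned filter would pick out.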
