Getting Started With Neural Networks

Paras Patidar
Published in MLAIT
4 min read · Dec 15, 2019

What You Will Learn

  • Introduction to Neural Networks
  • Building blocks of a Neural Network Model

Picking the right network architecture is more an art than a science; although there are best practices and principles you can rely on, only practice can make you a proper neural-network architect.

Introduction to Neural Networks

Deep Learning is done with the help of neural networks, which are loosely inspired by the way the human brain works. The picture below shows the similarity between a biological neuron and an artificial neural network.

A neural network is made of different layers stacked on top of each other, which lets it learn representations and complex features from data.

For example, consider the MNIST dataset: we pass in an image of a handwritten digit, it flows through the different layers, each learning a different representation of the data, and the network then predicts the underlying digit. Image classification like this is typically done with Convolutional Neural Networks.


Building Blocks Of Neural Networks

A neural network revolves around the following things, which are used to build and train it:

(Source: Deep Learning with Python, by François Chollet)
  • The input data and the corresponding target
  • Weight Initialization
  • Layers which are stacked or combined together to form a model.
  • The Loss Function, which is the feedback signal used for learning
  • The Optimizer, which determines how the learning proceeds

Input Data and the Target

Choose your dataset and preprocess it so it is clean and useful for the network. This is where we define the features and target values for our model.
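The preprocessing step above can be sketched with plain NumPy (a minimal sketch using synthetic arrays as a stand-in for a real dataset such as MNIST; the shapes and class count are illustrative assumptions):

```python
import numpy as np

# Synthetic stand-in for image data: 6 samples of 784 pixel values in [0, 255]
rng = np.random.default_rng(0)
features = rng.integers(0, 256, size=(6, 784)).astype("float32")
targets = np.array([0, 2, 1, 2, 0, 1])  # integer class labels

# Scale pixel values to [0, 1] so the network trains more stably
features /= 255.0

# One-hot encode the integer targets (3 classes in this toy example)
num_classes = 3
one_hot = np.zeros((targets.size, num_classes), dtype="float32")
one_hot[np.arange(targets.size), targets] = 1.0
```

After this, `features` and `one_hot` are ready to be fed to a model as input data and targets.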

Weight Initialization

Initializing appropriate weights can help your model learn better and reduce the loss faster. There are different methods you can use for weight initialization:

  • Constant Weights
  • Random Uniform Methods
  • General Rule
  • Normal Distribution
  • and so on...
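The schemes listed above can be sketched in NumPy (a minimal sketch; the layer sizes and constants are illustrative assumptions, and in Keras you would instead pass a `kernel_initializer` argument to a layer):

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_out = 784, 32  # fan-in and fan-out of a hypothetical dense layer

# Constant weights: every connection starts at the same value
w_const = np.full((n_in, n_out), 0.01)

# Random uniform: small values drawn from [-0.05, 0.05]
w_uniform = rng.uniform(-0.05, 0.05, size=(n_in, n_out))

# "General rule": uniform in [-1/sqrt(n_in), 1/sqrt(n_in)]
limit = 1.0 / np.sqrt(n_in)
w_rule = rng.uniform(-limit, limit, size=(n_in, n_out))

# Normal distribution: zero mean, small standard deviation
w_normal = rng.normal(0.0, 0.05, size=(n_in, n_out))
```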

Layers

Layers are the building blocks of neural networks; they are stacked or combined together to form a neural network model.

A layer is a data-processing module that takes one or more input tensors and outputs one or more tensors. Together, these layers contain the network’s knowledge. Different layers are suited to different tensor formats and kinds of data processing.

Example: vector data, stored in 2D tensors of shape (samples, features), is often processed by dense (fully connected) layers. Sequence data, stored in 3D tensors of shape (samples, timesteps, features), is processed by recurrent layers such as an LSTM layer. Image data is processed by convolutional layers.


Code: sequentially connecting two dense layers in Keras.

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(32, input_shape=(784,)))  # first layer must be told its input shape
model.add(layers.Dense(32))  # later layers infer their input shape automatically

Loss Function and Optimizer

After neural network architecture is defined, you have to choose two more things:

  • Loss Function (Objective Function): the quantity the network tries to minimize during training, so choosing the right one for the problem matters.
  • Optimizer: updates the network on the basis of the loss function; it implements a specific variant of stochastic gradient descent (SGD).

A neural network that has multiple outputs may have multiple loss functions (one per output). But the gradient-descent process must be based on a single scalar loss value, so for multi-loss networks all losses are combined (via averaging) into a single scalar quantity.
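This combination of per-output losses into one scalar can be sketched as a weighted average (a minimal sketch; the loss values and weights are made-up numbers, not from any real model):

```python
import numpy as np

# Hypothetical per-output losses from a two-output network
loss_main = 0.40  # e.g. cross-entropy on the main output
loss_aux = 0.10   # e.g. mean squared error on an auxiliary output

# Gradient descent needs one scalar, so combine the losses, here as a
# weighted average that down-weights the auxiliary output
loss_weights = [1.0, 0.5]
total_loss = np.average([loss_main, loss_aux], weights=loss_weights)  # ≈ 0.3
```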

Choosing the right loss function for the problem is very important, because the network will take any shortcut it can to minimize the loss. If the objective doesn’t fully correlate with success for the task at hand, your network will end up doing things you didn’t want.

For common problems such as classification, regression, and sequence prediction, there are simple guidelines for choosing a loss function:

  • Two-class classification: binary cross-entropy
  • Multi-class classification: categorical cross-entropy
  • Regression: mean squared error
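As a sketch of what these losses compute under the hood (minimal NumPy versions written for illustration; Keras’s actual implementations differ in details such as reduction and numerical handling):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true is one-hot; sum over classes, average over samples
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=-1))

def mean_squared_error(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

mse = mean_squared_error(np.array([1.0, 2.0]), np.array([1.0, 4.0]))  # → 2.0
```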

Only when you’re working on truly new research problems will you have to develop your own Loss functions.

I will be writing more articles on neural networks to give you hands-on experience.

Thank You!

Stay Tuned & Stay Connected with #MLAIT

Follow us for more tutorials on ML, AI, and Cloud, and join the MLAIT Telegram group.
