Neural Network Machine Learning Algorithm From Scratch in Python

Demystifying the So-Called Black Box of Neural Networks


Introduction:

Do you really think that a neural network is a black box? A neuron inside the human brain may be very complex, but a neuron in an artificial neural network is certainly not that complex.

No matter what software you are developing right now, if you are not getting up to speed on machine learning, you will fall behind. We are entering an era in which one piece of software will create another and perhaps even automate itself.

In this article, we are going to discuss how to implement a neural network machine learning algorithm from scratch in Python. This means we are not going to use deep learning libraries such as TensorFlow, PyTorch, or Keras.

Note that this is one of the posts in the series Machine Learning from Scratch. You may also like to read other posts in the series, such as Gradient Descent From Scratch, Linear Regression from Scratch, Logistic Regression from Scratch, and Decision Tree from Scratch.

You may like to watch a video version of this article for a more detailed explanation…

General Terms:

Let us first discuss a few mathematical concepts used in this post.

Dot Product of Matrices: The dot product of two matrices is one of the most important operations in deep learning. In mathematics, the dot product is an operation that takes two equal-length sequences of numbers as input and outputs a single number; matrix multiplication applies this dot product between every row of the first matrix and every column of the second.

Not all pairs of matrices are eligible for multiplication: the number of columns of the first matrix must equal the number of rows of the second. If we multiply an m×n matrix by an n×p matrix, the result is an m×p matrix, where the first dimension is the number of rows and the second is the number of columns. The shared dimension n is what makes the two matrices compatible.

Dot Product of Matrix
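
As a quick illustration, here is a minimal NumPy sketch (the matrices and their shapes are made up purely for demonstration):

import numpy as np

A = np.random.rand(3, 2)   # a 3×2 matrix
B = np.random.rand(2, 4)   # a 2×4 matrix

# The inner dimensions match (2 == 2), so the product is defined
C = np.dot(A, B)
print(C.shape)             # (3, 4): an m×p result from m×n and n×p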

Sigmoid: A sigmoid function is an activation function. For any given input number n, the sigmoid function maps that number to an output between 0 and 1: sigmoid(n) = 1 / (1 + e^(-n)).
As n gets larger, the output gets closer to 1, and as n gets smaller (more negative), the output gets closer to 0.

Sigmoid function used in machine learning classification
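
A quick numerical check, using the same definition we will code later (the sample inputs are arbitrary):

import numpy as np

def sigmoid(n):
    return 1 / (1 + np.exp(-n))

print(sigmoid(-5))   # ≈ 0.0067 — a large negative input maps close to 0
print(sigmoid(0))    # 0.5 — zero maps to the midpoint
print(sigmoid(5))    # ≈ 0.9933 — a large positive input maps close to 1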

Sigmoid Derivative: the derivative of the sigmoid function is the sigmoid multiplied by one minus the sigmoid, i.e. sigmoid'(n) = sigmoid(n) × (1 − sigmoid(n)).

The derivative of the Sigmoid Function
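
As a small sketch, we can verify this identity numerically (the sample input value is arbitrary):

import numpy as np

def sigmoid(n):
    return 1 / (1 + np.exp(-n))

def sigmoid_derivative(n):
    return sigmoid(n) * (1.0 - sigmoid(n))

n = 0.7
s = sigmoid(n)
print(sigmoid_derivative(n))  # ≈ 0.2217
print(s * (1 - s))            # the same value, by definition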

Implementation:

Import Libraries:

We are going to import the NumPy and pandas libraries.

import numpy as np
import pandas as pd

Load Data:

We will use pandas to load the CSV data into a pandas DataFrame.

df = pd.read_csv('Data.csv')
df.head()
Classification Data for Neural Network from Scratch
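
If you do not have the original Data.csv, a small synthetic stand-in with the same column names can be generated for experimentation (the value ranges and the labelling rule below are made-up assumptions, not the original data):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_rows = 1000

# Hypothetical stand-in for Data.csv: two features and a binary label
df = pd.DataFrame({
    'Glucose': rng.uniform(70, 200, n_rows),
    'BloodPressure': rng.uniform(50, 120, n_rows),
})
df['Diabetes'] = (df['Glucose'] > 140).astype(int)  # toy labelling rule
df.head()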

To proceed further we need to separate the features and labels.

x = df[['Glucose','BloodPressure']]
y = df['Diabetes']

After that let us define the sigmoid function.

# Sigmoid activation: maps any real number to a value between 0 and 1
def sigmoid(input):
    output = 1 / (1 + np.exp(-input))
    return output

There is one more function that we are going to use. It is related to the sigmoid and is called the sigmoid derivative function.

# Define the sigmoid derivative function
def sigmoid_derivative(input):
    return sigmoid(input) * (1.0 - sigmoid(input))

Then we need to define the network training function as below.

def train_network(features, label, weights, bias, learning_rate, epochs):
    for epoch in range(epochs):
        # Forward pass: weighted sum of the inputs plus the bias
        dot_prod = np.dot(features, weights) + bias
        # Using sigmoid
        preds = sigmoid(dot_prod)
        # Error between predictions and true labels
        errors = preds - label
        deriva_cost_funct = errors
        deriva_preds = sigmoid_derivative(dot_prod)
        deriva_product = deriva_cost_funct * deriva_preds
        # Update the weights
        weights = weights - np.dot(features.T, deriva_product) * learning_rate
        loss = errors.sum()
        print(loss)
        # Update the bias, one sample at a time
        for i in deriva_product:
            bias = bias - i * learning_rate
    return weights, bias
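
To make the update step explicit: by the chain rule, the gradient of the error with respect to each weight is d(error)/d(prediction) × d(prediction)/d(dot product) × d(dot product)/d(weight). In the code, these three factors correspond to (preds − label), sigmoid_derivative(dot_prod), and the input features; np.dot(features.T, deriva_product) carries out the last multiplication and sums the contribution of every training row in one step.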

After that, let us initialize the required parameters.

np.random.seed(10)
features = x
label = y.values.reshape(1000, 1)
weights = np.random.rand(2, 1)   # one weight per input feature
bias = np.random.rand(1)
learning_rate = 0.0004
epochs = 100

We are ready to train the network now:

Training Neural Network from Scratch in Python
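
The training call from the screenshot is not reproduced as text here, but based on the parameters defined above it would look something like this (the returned values assume the version of train_network written earlier, which returns the updated weights and bias):

# Train the single-neuron network; the summed error is printed once per epoch
weights, bias = train_network(features, label, weights, bias, learning_rate, epochs)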

End Notes:

In this article, we discussed how to implement a neural network model from scratch without using a deep learning library. If you compare it with an implementation that uses such a library, you will get nearly the same result.
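
For reference, a rough library-based equivalent of the same single-neuron network might look like the sketch below. This assumes TensorFlow/Keras is installed, and the optimizer and loss are chosen to roughly mirror the from-scratch version rather than reproduce it exactly:

import tensorflow as tf

# One dense unit with a sigmoid activation ≈ the single neuron built above
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0004),
              loss='mse')  # roughly mirrors the squared-error-style update above
model.fit(x, y, epochs=100, verbose=0)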

The code is uploaded to GitHub here.

Happy Coding !!
