A Neural Network fully coded in NumPy and TensorFlow
Super simple, can’t be any easier, code provided

In a previous post, we explained the mechanics behind neural networks. In this post, we will show a basic implementation in pure NumPy and in TensorFlow.
As we previously explained, a neural network's execution has four main steps:
- Forward step (where we go from inputs to outputs)
- Loss function (where we compare the calculated outputs with real outputs)
- Backward step (where we calculate the first delta at the loss function and then back-propagate it)
- Optimization step (where we update the internal weights using the deltas and the learning rate)
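Before moving to matrices, here is a minimal sketch of how the four steps compose, using a single made-up weight w trained so that w * x matches a target y (all numbers are illustrative):

x, y = 2.0, 10.0                    # one made-up input/target pair
w, learningRate = 0.5, 0.1          # initial weight and learning rate

for step in range(50):
    output = w * x                  # forward step
    loss = 0.5 * (output - y) ** 2  # loss function
    dOutput = output - y            # backward step: delta at the loss...
    dW = dOutput * x                # ...back-propagated to the weight
    w -= learningRate * dW          # optimization step

print(w)                            # converges to 5.0, since 5.0 * 2.0 == 10.0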

The easiest way to build a full working example is to take only one operator (matrix multiplication), one loss function (here, mean squared error with a 1/2 factor), and one optimizer (gradient descent), then run them end to end.
import numpy as np

#Operator example in forward and backward (matrix multiplication)
def forwardMult(A, B):
    # Forward step: C = A x B
    return np.matmul(A, B)

def backwardMult(dC, A, B, dA, dB):
    # Backward step: given dLoss/dC, accumulate the gradients
    # dLoss/dA = dC x B^T and dLoss/dB = A^T x dC in place
    dA += np.matmul(dC, np.transpose(B))
    dB += np.matmul(np.transpose(A), dC)
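As a quick sanity check, the gradient formulas above can be compared against a finite-difference approximation; the sizes and the probed entry below are arbitrary:

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
dC = rng.standard_normal((2, 4))        # upstream gradient, as if from a loss
dA, dB = np.zeros_like(A), np.zeros_like(B)
backwardMult(dC, A, B, dA, dB)

eps = 1e-6
i, j = 1, 2                             # probe one entry of A
Aplus = A.copy()
Aplus[i, j] += eps
numerical = np.sum((forwardMult(Aplus, B) - forwardMult(A, B)) * dC) / eps
print(numerical, dA[i, j])              # the two values should nearly match
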
#Loss example in forward and backward (MSE)
def forwardloss(predictedOutput, realOutput):
    if predictedOutput.shape != realOutput.shape:
        raise ValueError("Shapes of the arrays are not the same")
    # Half mean squared error over all elements
    return np.mean(0.5 * np.square(predictedOutput - realOutput))

def backwardloss(predictedOutput, realOutput):
    if predictedOutput.shape != realOutput.shape:
        raise ValueError("Shapes of the arrays are not the same")
    # Gradient of the loss above with respect to the prediction
    return (predictedOutput - realOutput) / predictedOutput.size
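To see what these two functions return, consider a tiny made-up example:

pred = np.array([[1.0, 2.0]])
real = np.array([[0.0, 4.0]])
print(forwardloss(pred, real))   # 0.5 * (1 + 4) / 2 = 1.25
print(backwardloss(pred, real))  # [[ 0.5 -1. ]]
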
#Optimizer example (gradient descent)
def updateweights(W, dW, learningRate):
    # Update the weights in place: W = W - learningRate * dW
    W -= learningRate * dW
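Putting the three pieces together, here is a minimal sketch of a full training run that learns a weight matrix W mapping random inputs to targets generated by a hidden matrix (the sizes, seed, and learning rate are illustrative):

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 3))           # made-up inputs
Wtrue = rng.standard_normal((3, 2))         # hidden mapping to recover
Y = forwardMult(X, Wtrue)                   # targets the network should learn
W = np.zeros((3, 2))                        # weights to be trained

for epoch in range(200):
    output = forwardMult(X, W)              # forward step
    loss = forwardloss(output, Y)           # loss function
    dOutput = backwardloss(output, Y)       # backward step: delta at the loss
    dX, dW = np.zeros_like(X), np.zeros_like(W)
    backwardMult(dOutput, X, W, dX, dW)     # back-propagate to the weights
    updateweights(W, dW, 0.5)               # optimization step

print(forwardloss(forwardMult(X, W), Y))    # loss ends up near zero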
The full code can be found in NumPy here, in TensorFlow here, and a comparison between both here.
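For readers who prefer to stay in this post, here is a minimal TensorFlow 2 style sketch of the same loop using tf.GradientTape (the linked repo may use a different API version; sizes and hyperparameters mirror the NumPy example and are illustrative):

import tensorflow as tf

X = tf.random.normal((100, 3))
Wtrue = tf.random.normal((3, 2))
Y = tf.matmul(X, Wtrue)                     # targets the network should learn
W = tf.Variable(tf.zeros((3, 2)))           # weights to be trained

for epoch in range(200):
    with tf.GradientTape() as tape:
        output = tf.matmul(X, W)                             # forward step
        loss = tf.reduce_mean(0.5 * tf.square(output - Y))   # loss function
    dW = tape.gradient(loss, W)             # backward step
    W.assign_sub(0.5 * dW)                  # optimization step (gradient descent)

print(loss.numpy())                         # loss ends up near zero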
The repo is available on GitHub as a tutorial to help understand how a neural network works.
In case you still have any questions, please do not hesitate to comment or contact me at: assaad.moawad@datathings.com
I am always glad to reply, improve this article, or collaborate if you have an idea in mind. If you enjoyed reading, follow us on: Facebook, Twitter, LinkedIn