A Neural Network Fully Coded in NumPy and TensorFlow
Super simple, can’t be any easier, code provided
In a previous post, we explained the mechanics behind neural networks. In this post we will show a basic implementation in pure NumPy and in TensorFlow.
As we previously explained, neural network execution has four main steps (a minimal numeric sketch follows this list):
- Forward step (where we go from inputs to outputs)
- Loss function (where we compare the calculated outputs with real outputs)
- Backward step (where we calculate the first delta at the loss function and then back-propagate)
- Optimization step (where we update the internal weights using the deltas and the learning rate)
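To make these four steps concrete before the matrix version, here is a minimal numeric sketch with a single scalar weight; the toy model y_hat = w * x, the squared-error loss, and all the values are our own illustration, not from the original example:

# One full iteration of the four steps on a toy model y_hat = w * x (our toy values)
x, y, w, lr = 2.0, 10.0, 3.0, 0.1
y_hat = w * x                # 1) forward step: 6.0
loss = (y_hat - y) ** 2      # 2) loss function (squared error): 16.0
dw = 2 * (y_hat - y) * x     # 3) backward step: d(loss)/dw = -16.0
w -= lr * dw                 # 4) optimization step: w becomes 4.6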
The easiest way to build a full working example is to take a single operator (matrix multiplication), a single loss function (RMSE), and a single optimizer (gradient descent), and run them end to end.
import numpy as np

def forwardMult(A, B):
    # Forward pass of C = A @ B
    return np.matmul(A, B)

def backwardMult(dC, A, B, dA, dB):
    # Backward pass: accumulate the loss gradients w.r.t. A and B
    dA += np.matmul(dC, B.T)
    dB += np.matmul(A.T, dC)
#Loss example in forward and backward…
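The original snippet breaks off here, so as an assumption on our part, the forward and backward passes of the RMSE loss could look like the sketch below, using the derivative d(RMSE)/d(pred_i) = (pred_i - target_i) / (n * RMSE):

# A possible RMSE implementation (our sketch, not the original post's code)
def forwardRMSE(pred, target):
    # Forward pass of the loss: square root of the mean squared error
    return np.sqrt(np.mean((pred - target) ** 2))

def backwardRMSE(pred, target):
    # Backward pass: d(RMSE)/d(pred) = (pred - target) / (n * RMSE)
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    return (pred - target) / (pred.size * rmse + 1e-12)  # epsilon avoids division by zero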
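Putting the operator, the loss, and gradient descent together, a full running example might look like this; the layer shapes, learning rate, and random data are placeholder choices of ours, not from the original post:

# Illustrative end-to-end run (shapes, learning rate, and data are placeholders)
np.random.seed(0)
X = np.random.randn(4, 3)             # inputs: 4 samples, 3 features
Y = np.random.randn(4, 2)             # targets: 4 samples, 2 outputs
W = np.random.randn(3, 2)             # the single weight matrix we learn
lr = 0.1

for step in range(100):
    pred = forwardMult(X, W)          # 1) forward step
    loss = forwardRMSE(pred, Y)       # 2) loss function
    dPred = backwardRMSE(pred, Y)     # 3) first delta at the loss...
    dX, dW = np.zeros_like(X), np.zeros_like(W)
    backwardMult(dPred, X, W, dX, dW) # ...back-propagated through the matmul
    W -= lr * dW                      # 4) optimization step (gradient descent)
print(loss)

Each pass through the loop performs exactly the four steps listed above, with backwardMult accumulating the weight gradient into dW before the update.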