A Neural Network fully-coded in Numpy and Tensorflow

Super simple, can’t be any easier, code provided

Assaad MOAWAD
DataThings

--

In a previous post, we explained the mechanics behind neural networks. In this post, we show a basic implementation in pure NumPy and then in TensorFlow.

As previously explained, a neural network's execution has four main steps:

  1. Forward step (where we go from inputs to outputs)
  2. Loss function (where we compare the calculated outputs with the real outputs)
  3. Backward step (where we calculate the first delta at the loss function and then back-propagate it)
  4. Optimization step (where we update the internal weights using the deltas and the learning rate)

A minimal loop tying these four steps together is sketched at the end of this section.
[Image: Neural networks step-by-step]

The easiest way to build a fully working example is to pick a single operator (matrix multiplication), a single loss function (RMSE), and a single optimizer (gradient descent), and to run them end to end.

import numpy as np

# Forward pass of the operator C = A·B
def forwardMult(A, B):
    return np.matmul(A, B)

# Backward pass: given dC (the gradient of the loss w.r.t. C),
# the chain rule gives dA = dC·B^T and dB = A^T·dC
def backwardMult(dC, A, B, dA, dB):
    dA += np.matmul(dC, np.transpose(B))
    dB += np.matmul(np.transpose(A), dC)

The loss function follows the same forward/backward pattern.
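The post's loss code does not appear in this excerpt. As a minimal sketch in the same style, assuming RMSE as the loss named above (the names forwardLoss and backwardLoss are illustrative, and the small epsilon is an assumption to avoid dividing by zero once the loss reaches zero):

import numpy as np

# Forward pass of the RMSE loss
def forwardLoss(predicted, real):
    return np.sqrt(np.mean(np.square(predicted - real)))

# Backward pass: derivative of the RMSE w.r.t. the predictions,
# d RMSE / d predicted = (predicted - real) / (n * RMSE)
def backwardLoss(predicted, real):
    n = predicted.size
    rmse = forwardLoss(predicted, real)
    return (predicted - real) / (n * rmse + 1e-12)  # epsilon: assumed guard against division by zero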

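With the operator and the loss in place, the four steps can be chained into plain gradient descent. This is a minimal sketch rather than the post's original training code: the toy data, the shapes, the learning rate, and the step count are all assumptions; it reuses forwardMult and backwardMult from above together with the forwardLoss/backwardLoss sketch.

import numpy as np

# Toy problem (assumed): recover W_true such that X·W_true = Y
X = np.random.rand(100, 3)
W_true = np.array([[2.0], [-1.0], [0.5]])
Y = np.matmul(X, W_true)

W = np.random.rand(3, 1)  # trainable weights
lr = 0.1                  # learning rate

for step in range(1000):
    # 1. Forward step
    pred = forwardMult(X, W)
    # 2. Loss function
    loss = forwardLoss(pred, Y)
    # 3. Backward step: first delta at the loss, then back-propagate
    dPred = backwardLoss(pred, Y)
    dX = np.zeros_like(X)
    dW = np.zeros_like(W)
    backwardMult(dPred, X, W, dX, dW)
    # 4. Optimization step: update the weights with the deltas and the learning rate
    W -= lr * dW

Note that the RMSE gradient is normalized by the loss itself, so the update magnitude does not vanish as quickly as it would with a plain squared error.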

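The TensorFlow part of the post is also not shown in this excerpt. As a sketch of what the same pipeline can look like when the framework derives the backward step automatically, here is a minimal TensorFlow 1.x-style version; the shapes, toy data, learning rate, and step count are assumptions:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

# Same assumed toy problem as above
X_data = np.random.rand(100, 3).astype(np.float32)
Y_data = np.matmul(X_data, np.array([[2.0], [-1.0], [0.5]], dtype=np.float32))

# 1. Forward step: a single matrix multiplication with a trainable weight
x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.random_normal([3, 1]))
pred = tf.matmul(x, W)

# 2. Loss function: RMSE
loss = tf.sqrt(tf.reduce_mean(tf.square(pred - y)))

# 3 + 4. Backward and optimization steps: TensorFlow differentiates
# the graph and applies gradient descent for us
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        _, l = sess.run([train, loss], feed_dict={x: X_data, y: Y_data})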
