# Build a Neural Network in 5 minutes to accelerate late invoice and debt collection using Keras and Python — Part 1

I started learning Python and Django a few months ago on Treehouse. In between my tech degree modules, I like to unwind by learning something new in Python. I developed an interest in deep learning models, artificial intelligence and the different tools used to build neural network models.

Many businesses, including startups, have a problem with late-paying customers. Just take a look at the search volume and trends on Google. The market for late payments and debt collection is estimated to be worth billions of dollars.

I expected the US to have the largest number of inquiries related to “late payment,” so I’m not sure why Singapore is the top country. If you know why, leave a comment!

Some startups are tackling the problem of late payments using artificial intelligence. They are building smart apps that automate the process of contacting customers through different communication agents. I thought it would be interesting to learn more about neural networks and artificial intelligence by building a deep learning model in Python.

Our example is built with **Keras**, a simple yet powerful deep learning Python library. Please see the installation requirements for Keras; you need to install a backend engine such as TensorFlow for the API to work.

**Deep Learning and Neural Networks**

Our brains contain networks of connected neurons that recognize patterns to learn and remember things.

Neural networks borrow this idea of pattern recognition: input neurons receive data and pass it through weighted connections to neurons in subsequent layers for processing, until a final output is produced. We call this process **forward propagation**.

*Neural networks try to find patterns in data. We call it Deep Learning because of the levels of hidden layers within the network; this is where the learning, or training, happens.*
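Forward propagation for a single neuron can be sketched in a few lines. The sigmoid activation and the specific weights, inputs, and bias here are illustrative assumptions, not values from the article:

```python
import numpy as np

def sigmoid(z):
    # Squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([1.0, 0.0, 1.0])    # values coming from the previous layer
weights = np.array([0.4, -0.2, 0.6])  # one weight per incoming connection
bias = 0.1

# The neuron sums each input times its weight, adds a bias, then activates.
output = sigmoid(inputs @ weights + bias)
print(round(output, 3))  # → 0.75
```

Stacking many such neurons into layers, each feeding the next, is all forward propagation is.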

**Backpropagation and minimizing errors**

In deep learning, we compare the model’s output with the expected output from the training dataset. Using **backpropagation**, the network keeps adjusting the weights on the connections between neurons until the difference between the model’s output and the training data’s output is as small as possible. In other words, it minimizes the error between our model and the expected output.

If we are expecting an output of 10, for example, and the model gives us an output of 6, then the error is 4. The model will go back and adjust the weights of the connections between neurons to minimize this error as much as possible.
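The weight adjustment described above can be sketched with a toy single-weight example. The linear neuron, squared-error loss, and learning rate are assumptions for illustration only, but the loop shows the core idea: nudge the weight in the direction that shrinks the error:

```python
target = 10.0
x = 2.0   # input value
w = 3.0   # initial weight, so the first prediction is 6 (an error of 4)
lr = 0.05 # learning rate: how big each correction step is

for step in range(50):
    prediction = w * x
    error = prediction - target  # starts at 6 - 10 = -4
    gradient = 2 * error * x     # derivative of (prediction - target)^2 w.r.t. w
    w -= lr * gradient           # adjust the weight against the gradient

print(round(w * x, 4))  # → 10.0
```

Backpropagation does exactly this, but for every weight in every layer at once, using the chain rule to work the gradients backwards from the output.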

**Sample Model**

I will start by building on an example that was inspired by reading this blog post on neural networks written in Python. We will be using Keras for building our multi-layered deep learning model. This powerful Python package allows for quick changes to a model’s architecture: we can adjust the number of hidden layers, the neurons within each layer, activation functions, loss functions and model types without needing to rewrite the underlying mathematical formulas.

In the XOR gate example below we have three inputs and one output for each training set example. The output depends only on the first two columns; the third column is not relevant. If the first two columns match (both 1 or both 0), the output is 1; otherwise the output is 0.
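The training table itself was shown as an image; the seven rows below are an assumed reconstruction consistent with the stated rule (output 1 when the first two columns match, with the test row [1, 0, 0] held out), not the original figure’s exact data:

```python
training_inputs = [
    [0, 0, 0],  # first two columns match -> output 1
    [0, 0, 1],
    [0, 1, 0],  # first two columns differ -> output 0
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
    [1, 1, 1],
]
training_outputs = [1, 1, 0, 0, 0, 1, 1]

# Verify every row obeys the rule; the third column never affects the output.
for row, out in zip(training_inputs, training_outputs):
    assert out == (1 if row[0] == row[1] else 0)
```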

After our model is trained using the data above, we want it to predict the correct output for the following test input:

The answer should be 0.

To build our model:

- We feed our model the seven training examples. Each input connects to the next hidden layer through a weight. Each neuron in the first hidden layer computes a weighted sum of the values of the nodes it is connected to.
- The model will compare the resulting output with the expected output for our training data. This is the model’s error.
- The model will continue to adjust the weights of neuron connections to minimize the error between the model’s output and the expected output.
- We adjust the number of hidden layers, the neurons within each layer, and the number of model iterations (epochs) to achieve the best results: high model accuracy and the lowest possible error.

Here is the model. It is actually not that long, but I included some comments to explain parts of the code; make sure you read them to get a better grasp of the model:
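The embedded code is not reproduced here, but a minimal sketch of the kind of Keras model described might look like the following. The layer sizes, activations, optimizer, and reduced epoch count are my assumptions, not the original gist:

```python
import numpy as np
from tensorflow import keras

# Seven training rows (an assumed reconstruction of the table above);
# the output is 1 when the first two columns match, 0 otherwise.
X = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
              [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype="float32")
y = np.array([[1], [1], [0], [0], [0], [1], [1]], dtype="float32")

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(8, activation="relu"),     # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # single output neuron
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# The article trains for 20,000 iterations; a few hundred keeps this sketch quick.
model.fit(X, y, epochs=500, verbose=0)

# Predict the held-out test input [1, 0, 0]; the output should be close to 0.
pred = model.predict(np.array([[1, 0, 0]], dtype="float32"), verbose=0)
print(float(pred[0, 0]))
```

Because Keras separates architecture from training, swapping in more layers, different activations, or another loss function is a one-line change each.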

Here is a look at our model running through 20,000 iterations. Notice how our loss drops significantly after epoch 500, but after 17,500 iterations the model cannot reach a lower loss value with the given number of layers and amount of data.

So did we train our model well?

For the input [1, 0, 0] our model predicted an output of 0.0000001062! Very close to our expected value of 0. At the end we also fed our training inputs back into the model, and it predicted the 1s and 0s of our training data very accurately. Not too bad.

Before moving on to part 2, attempt to run the model on your own. Adjust the model’s architecture by removing some hidden layers, using more or fewer neurons per layer, or changing the epochs or activation functions. See how your results differ with each change.

You can clone or download the model here:

Or you can try our platform for free here: