Improving Car Cost Predictions with Feed-Forward Neural Networks

Damian Orellana
3 min read · Jun 10, 2020

With PyTorch

In a previous blog we saw how to use linear regression to predict the prices of cars. In this part we'll see how to improve that model with a feed-forward neural network, and how this improves the predictions.

What is a Feed forward neural network?

A feed-forward network is the simplest type of artificial neural network: information moves in only one direction, from the inputs, through the hidden layers, to the outputs.

Previously we used only a single linear regression function to make the predictions:

Model with only one Linear Reg.
Validation Loss of First Model
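
As a reference, here is a minimal sketch of what that single-layer model could look like in PyTorch. The input size of 5 and the class name are assumptions for illustration; the real number of features depends on which columns of the car dataset are used, which you can check in the linked notebook.

```python
import torch.nn as nn

# Minimal sketch of the first model: a single linear layer mapping
# the input features directly to the predicted price.
input_size = 5    # assumed number of input features (depends on the dataset columns)
output_size = 1   # the predicted price

class CarPriceLinearModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, xb):
        return self.linear(xb)
```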

With this model we get a validation loss of 7.9774 after several rounds of fitting, and the predictions were not very close to the targets.

We need to improve this linear regression model to make better predictions. To do that, we can use a neural network with one hidden layer and an activation function.

With these changes the model can learn more complex relationships. The activation function we are using is called the Rectified Linear Unit (ReLU), and it has a simple formula: relu(x) = max(0, x). In other words, any negative element is replaced with a 0.
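
For example, applying ReLU to a small tensor in PyTorch looks like this:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-3.0, -0.5, 0.0, 2.0, 4.5])
print(F.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 2.0000, 4.5000])
```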

Validation Loss of Improved Model

The size given to the hidden layer here is 32. When we implement this and fit the model, the validation loss we get is 4.6235.
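
A minimal sketch of this improved model, reusing the same placeholder input and output sizes as before: one hidden layer of size 32 followed by ReLU. The class name is illustrative, not the notebook's exact code.

```python
import torch.nn as nn
import torch.nn.functional as F

class CarPriceHiddenModel(nn.Module):
    def __init__(self, input_size, output_size, hidden_size=32):
        super().__init__()
        self.linear1 = nn.Linear(input_size, hidden_size)   # input -> hidden layer (size 32)
        self.linear2 = nn.Linear(hidden_size, output_size)  # hidden layer -> predicted price

    def forward(self, xb):
        out = self.linear1(xb)
        out = F.relu(out)        # negative activations are replaced with 0
        return self.linear2(out)
```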

The difference between the first loss and this one is not huge, but it helps the predictions a little, as we can see here:

Prediction of item 10 in First Model
Prediction of item 10 in Improved Model

In these two examples for item 10, we can see the difference between the predictions.

Even so, the prediction is still quite different from the target.
Can we get much better predictions? Yes: we can add one more hidden layer and try to improve the results further to get closer predictions.

Improved Model with two Hidden Layers

Here we only add one more layer with an activation; the sizes of the hidden layers are 32 and 16. This gives the model more capacity, helping it learn more complex patterns.

Model with two hidden layers
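
A sketch of this two-hidden-layer version, again with placeholder input and output sizes and an illustrative class name rather than the notebook's exact code:

```python
import torch.nn as nn
import torch.nn.functional as F

class CarPriceDeepModel(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.linear1 = nn.Linear(input_size, 32)   # first hidden layer (size 32)
        self.linear2 = nn.Linear(32, 16)           # second hidden layer (size 16)
        self.linear3 = nn.Linear(16, output_size)  # output layer

    def forward(self, xb):
        out = F.relu(self.linear1(xb))
        out = F.relu(self.linear2(out))
        return self.linear3(out)
```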

This doesn't make a big change, but we can verify that it increases prediction accuracy, giving us better results.

With this change the validation loss doesn't decrease by much; it is now 3.6327, only about 0.99 lower than before.

As we can see in the next examples, the predictions are closer to the targets. Here are the predictions for items 0, 10, and 20:

Prediction of item 0
Prediction of item 10
Prediction of item 20
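
These per-item comparisons can be produced with a small helper like the one below. The `predict_single` function and the `val_ds` dataset name are assumptions for illustration; the notebook may do this differently.

```python
import torch

def predict_single(inputs, target, model):
    # Add a batch dimension, run the model, and compare with the target.
    prediction = model(inputs.unsqueeze(0)).squeeze().item()
    print("Target:    ", target.item())
    print("Prediction:", prediction)

# Hypothetical usage, assuming `val_ds` is the validation dataset:
# inputs, target = val_ds[10]
# predict_single(inputs, target, model)
```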

The first sample still shows a noticeable difference between the target and the prediction.

Even so, the second and third samples are very close to their targets, especially the second, where the difference is only about 0.56.

With this simple feed-forward neural network with two hidden layers, we can see how much the model improves, and we can add more layers if needed to check whether that improves the predictions further.

This is very helpful: we saw how adding more layers can improve the whole model and bring the predictions closer to the targets. The same idea can also be applied to other problems, such as logistic regression and classification.

After seeing all these examples and concepts, we can now build better neural networks, knowing that different numbers of hidden layers and different layer sizes can lead to better predictions.
Obviously this is only a small part of what there is to know about neural networks, and there are many more types of models that can be used. Even so, it is a good starting point, and if you want to learn more about this awesome field of computer science you can go to Jovian.ml and freecodecamp.org.

To see the entire notebook with its different versions and validation losses, you can go to this link.
