Training a neural network to its best performance is still not an easy task. This post lists important tricks and methods that help in training better neural networks.


Having done a lot of research and paper implementations, and having trained dozens of different CNN and LSTM models, I decided to share the tricks and methods I learned along the way. Not all of these tricks are mine; some were learned from papers and other sources. I will write only about the ones that proved beneficial in my experiments. They should definitely help you improve your training accuracy and validation loss.

This is an ongoing post, meaning I will add more tricks as and when I come across them.

Trick 1: No Augmentation

After finishing training the network as you normally would, restart training with a very low learning rate for a small number of epochs (3–6) WITHOUT data augmentation. This fine-tuning makes a big difference and can give you a 2%–3% accuracy improvement (icing on the cake). This was discovered by Prof. …
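The two-phase schedule above can be sketched as a plain config. This is my own minimal illustration, not the author's code; the choice of dividing the base learning rate by 100 for the fine-tuning phase is an assumption, not something the post specifies.

```python
def training_plan(base_lr=1e-3, main_epochs=90, finetune_epochs=5):
    """Two-phase plan: normal training with augmentation, then a short
    fine-tune at a much lower learning rate with augmentation turned off.

    The base_lr/100 factor for phase 2 is an assumed choice for
    illustration; the post only says "very low learning rate".
    """
    phases = [
        # Phase 1: train as usual, with data augmentation.
        {"epochs": main_epochs, "lr": base_lr, "augment": True},
        # Phase 2: 3-6 epochs, very low LR, NO augmentation.
        {"epochs": finetune_epochs, "lr": base_lr / 100, "augment": False},
    ]
    return phases
```

In a real training loop you would run your framework's fit step once per phase, toggling the augmentation pipeline off and resetting the optimizer's learning rate for the second phase.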

Recommendation engines based on computer vision have a strong impact on e-commerce business metrics and improve the customer experience.

Picture depicting typical human behavior. | artwork by Subhash @ Brillio


With the success of supervised learning, CNNs, high computing power, and open-source libraries, the field of computer vision (CV) has reached a level where computers can imitate many human tasks.

In this article I will explain how we at Brillio built a next-generation recommendation engine for the e-commerce industry.

These visual-similarity recommendation engines work the same way humans act while shopping. This enables a much better clothing-discovery experience and improves business metrics.
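The core retrieval idea behind a visual-similarity engine can be sketched briefly. This is my own minimal illustration, not Brillio's implementation: it assumes items are already represented by CNN image embeddings, and recommends the nearest catalog items by cosine similarity.

```python
import numpy as np

def recommend(query_emb, catalog_embs, top_k=3):
    """Return indices of the top_k catalog items most similar to the query.

    query_emb:    (d,) embedding of the query image
    catalog_embs: (n, d) embeddings of the catalog images
    Similarity is cosine similarity (dot product of L2-normalised vectors).
    """
    q = query_emb / np.linalg.norm(query_emb)
    c = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per catalog item
    return np.argsort(-scores)[:top_k]  # indices, most similar first
```

In practice the embeddings would come from a CNN trained or fine-tuned on fashion imagery, and the nearest-neighbour search would use an approximate index rather than a full scan.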

I presented a paper on this at the Indian Institute of Ahmedabad in April 2019. You can read the finer details in the paper. …

A roundup of methods to tackle class imbalance

General representation of the problem of a classifier trained with imbalanced classes

Life isn’t fair. There is an unequal distribution of natural resources across countries, an unequal distribution of money, an unequal distribution of political power, and so on. …

Multi-agent RL algorithms are notoriously unstable to train. This article describes a way to stabilise training, along with experimental results for the Unity Tennis environment.

OpenAI published a paper in January 2018 on multi-agent RL that uses a decentralised-actor, centralised-critic approach. Though it improved much upon existing MA-RL algorithms and showed very good results, it is still unstable to train. The DDPG algorithm is difficult to train but more stable. I will describe a way to substantially improve upon this. Let's first understand the OpenAI paper in brief.

I assume that readers have knowledge of reinforcement learning (actor-critic in particular), so I will not go into it. …
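The decentralised-actor, centralised-critic input layout mentioned above can be sketched as follows. This is a minimal illustration of the general idea, not the paper's or the author's code: each agent's critic sees every agent's observation and action, while each actor conditions only on its own observation.

```python
import numpy as np

def critic_input(observations, actions):
    """Centralised critic input: concatenation of ALL agents'
    observations and actions into a single vector."""
    return np.concatenate([np.concatenate(observations),
                           np.concatenate(actions)])

def actor_input(observations, agent_idx):
    """Decentralised actor input: only this agent's own observation,
    so the learned policy can act without seeing other agents."""
    return observations[agent_idx]
```

The extra information given to the critic during training is what stabilises learning in a non-stationary multi-agent setting; at execution time only the actors are used, so no global information is needed.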


abhishek kushwaha

A data scientist and deep learning engineer specialising in computer vision and NLP
