Abhishek Sharma in DataDrivenInvestor · Transformer break-down: Positional Encoding · The Attention paper first introduced the pure attention-based architecture, which did away with recurrence and implicit positional information… · 7 min read · Nov 29, 2021

Abhishek Sharma in Towards Data Science · Supercharging NumPy with Numba · Running your loop/NumPy code at C/FORTRAN speeds · 6 min read · Mar 17, 2021

Abhishek Sharma in Towards Data Science · PyTorch JIT and TorchScript · A path to production for PyTorch models · 5 min read · Nov 10, 2020

Abhishek Sharma in Towards Data Science · Neural Collaborative Filtering · Supercharging collaborative filtering with neural networks · 10 min read · Dec 16, 2019

Abhishek Sharma in Towards Data Science · eXtreme Deep Factorization Machine (xDeepFM) · The new buzz in the recommendation-system domain · 7 min read · Dec 12, 2019

Abhishek Sharma in Towards Data Science · Attention-based Neural Machine Translation · Attention mechanisms are being increasingly used to improve the performance of Neural Machine Translation (NMT) by selectively focusing on… · 8 min read · Mar 9, 2019
Abhishek Sharma in Towards Data Science · Decrypting your Machine Learning model using LIME · Why should you trust your Machine Learning model? · 8 min read · Nov 4, 2018
Abhishek Sharma in Towards Data Science · Algorithms for hyperparameter optimisation in Python · Hyperparameters generally have a significant impact on the success of machine learning algorithms. A poorly configured ML model may… · 8 min read · Nov 3, 2018

Abhishek Sharma in Towards Data Science · What makes lightGBM lightning fast? · Understanding GOSS and EFB, the core pillars of lightGBM · 6 min read · Oct 15, 2018