Articles by Abhishek Sharma:

- Transformer Break-down: Positional Encoding (DataDrivenInvestor, Nov 29, 2021). The Attention paper first introduced the pure attention-based architecture, which did away with recurrence and implicit positional information…
- Supercharging NumPy with Numba (Towards Data Science, Mar 17, 2021). Running your loop/NumPy code at C/FORTRAN speeds.
- PyTorch JIT and TorchScript (Towards Data Science, Nov 10, 2020). A path to production for PyTorch models.
- Neural Collaborative Filtering (Towards Data Science, Dec 16, 2019). Supercharging collaborative filtering with neural networks.
- eXtreme Deep Factorization Machine (xDeepFM) (Towards Data Science, Dec 12, 2019). The new buzz in the recommendation-system domain.
- Attention-based Neural Machine Translation (Towards Data Science, Mar 9, 2019). Attention mechanisms are being increasingly used to improve the performance of Neural Machine Translation (NMT) by selectively focusing on…
- Decrypting your Machine Learning model using LIME (Towards Data Science, Nov 4, 2018). Why should you trust your Machine Learning model?
- Algorithms for hyperparameter optimisation in Python (Towards Data Science, Nov 3, 2018). Hyperparameters generally have a significant impact on the success of machine learning algorithms. A poorly configured ML model may…
- What makes LightGBM lightning fast? (Towards Data Science, Oct 15, 2018). Understanding GOSS and EFB, the core pillars of LightGBM.