Sushmita · Optimizers (Jan 29, 2022)
This is part 2 of optimizers, in which we will get an idea of how SGD with momentum and Adagrad optimizers work and why we require them. In…

Sushmita · Gradient Descent, Stochastic Gradient Descent and Mini-Batch SGD (Nov 20, 2021)
In neural networks our main aim is to reduce the loss function so that we can get better predictions with high accuracy. To reduce the loss…

Sushmita · Univariate, Bivariate and Multivariate Analysis (Nov 16, 2021)
Data visualization is the technique of understanding your data in order to get insights from it. In this post we will look at…