More from Vitaly Bushaev:

- "Adam — latest trends in deep learning optimization" in Towards Data Science, Oct 22, 2018
- "Understanding RMSprop — faster neural network learning" in Towards Data Science, Sep 2, 2018
- "Stochastic Gradient Descent with momentum" in Towards Data Science, Dec 4, 2017
- "How do we 'train' neural networks?" in Towards Data Science, Nov 27, 2017
- "Improving the way we work with learning rate" in techburst, Nov 16, 2017