Keras optimizers comparison on a GAN

Tuning variables:

The learning rate is set to 0.002 and all other parameters are left at their defaults. Each model is trained for 2000 epochs with a batch size of 256.
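To make these shared settings concrete, here is a minimal sketch of how they might be wired into a Keras run. It assumes the standalone Keras 2.x API (keras.optimizers with an lr argument; newer tf.keras releases use learning_rate instead), and the model and data below are stand-ins, not the GAN from the notebook linked at the end.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

# Shared settings used for every optimizer in the comparison
LEARNING_RATE = 0.002
EPOCHS = 2000
BATCH_SIZE = 256

# Stand-in model and data, only to show where the settings go
model = Sequential([
    Dense(16, activation='relu', input_dim=8),
    Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=LEARNING_RATE))

x = np.random.rand(1024, 8)
y = np.random.randint(0, 2, size=(1024, 1))
model.fit(x, y, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=0)
```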

The optimizers to be compared are:

SGD

Stochastic gradient descent optimizer.

Includes support for momentum, learning rate decay, and Nesterov momentum.
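For reference, a hedged sketch of how SGD and those options look in the standalone Keras 2.x API (values other than the learning rate are the library defaults):

```python
from keras import optimizers

# Plain gradient descent by default; momentum, learning-rate decay and
# Nesterov momentum are opt-in arguments.
sgd = optimizers.SGD(lr=0.002,        # learning rate used in this comparison
                     momentum=0.0,    # default: no momentum
                     decay=0.0,       # default: no learning-rate decay
                     nesterov=False)  # default: standard (non-Nesterov) update
```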

SGD Output

RMSprop

RMSprop optimizer.

RMSprop Output

Adagrad

Adagrad optimizer.

Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the updates.
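As an illustration (standalone Keras 2.x API), only the learning rate is changed from the defaults; the per-parameter adaptation described above happens internally:

```python
from keras import optimizers

# Adagrad keeps a per-parameter accumulator of squared gradients, so
# frequently updated parameters receive progressively smaller steps.
adagrad = optimizers.Adagrad(lr=0.002)  # other arguments left at their defaults
```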

Adadelta

Adadelta optimizer.

Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. Compared to Adagrad, in the original version of Adadelta you don’t have to set an initial learning rate. In this version, initial learning rate and decay factor can be set, as in most other Keras optimizers.
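A sketch of the Keras 2.x constructor, showing the initial learning rate and decay factor this version exposes (rho is the default decay for the gradient window):

```python
from keras import optimizers

# Adadelta adapts step sizes from a decaying window of past gradients (rho);
# unlike the original paper, Keras also lets you set lr and a decay factor.
adadelta = optimizers.Adadelta(lr=0.002,   # initial learning rate used here
                               rho=0.95,   # default window decay for past gradients
                               decay=0.0)  # default learning-rate decay
```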

Adam

Adam optimizer.

Adam Output

Adamax

Adamax is a variant of Adam based on the infinity norm.
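As a sketch (standalone Keras 2.x API), Adamax takes the same style of arguments as Adam; beta_2 here controls the exponentially weighted infinity norm:

```python
from keras import optimizers

# Adam variant that scales updates by an exponentially weighted infinity norm
# (a running max of past gradients) instead of the L2-based second moment.
adamax = optimizers.Adamax(lr=0.002,      # learning rate used in this comparison
                           beta_1=0.9,    # default first-moment decay rate
                           beta_2=0.999)  # default decay rate for the infinity norm
```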

Adamax Output

Nadam

Nesterov Adam optimizer.

Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
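For completeness, a sketch of the Keras 2.x constructor; apart from the learning rate, the arguments are left at their defaults:

```python
from keras import optimizers

# Adam-style adaptive per-parameter steps combined with a Nesterov-style
# lookahead on the momentum term.
nadam = optimizers.Nadam(lr=0.002)  # other arguments left at their defaults
```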

Nadam Output

Code : https://colab.research.google.com/drive/1VGJS1zjsugpi2Q3Bc98H5Tw7RGN-dIEq

Sources: https://keras.io/
