WEEK 5: TRAINING RESNET-50, ALEXNET, VGG16

FATİH ERBAĞ · Published in bbm406f19 · Dec 29, 2019

LET’S BEGIN!!!!

This week we will share with you the results of our training runs. Last week we said we would try more than one model. Our first guests this week are ResNet-50, VGG16, and AlexNet.

As you know, we use the ISIC data archive. We shared the scripts needed to download this archive in previous weeks. Those of you who have downloaded it will have noticed that our dataset is extremely large: we are talking about 50 GB. This can be a problem for those who will use Colab to train on it. In that case, you can create a new, smaller dataset by sampling from ours. This week we used 2501 training, 136 validation, and 657 test images for ResNet-50. You can adjust these splits as you like. We performed this week's training with this reduced dataset.
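If you are wondering what creating such a reduced split can look like in PyTorch, here is a minimal sketch. The folder name "data", the transforms, and the one-folder-per-class layout (which torchvision's ImageFolder expects) are our illustrative assumptions, not the exact script we used:

from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# Resize to the 224x224 input these models expect and normalize
# with the ImageNet statistics used for pretraining.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: data/<class_name>/<image>.jpg
dataset = datasets.ImageFolder("data", transform=transform)

# Carve out a small subset matching this week's sizes:
# 2501 train, 136 validation, 657 test; the rest stays unused.
rest = len(dataset) - (2501 + 136 + 657)
train_set, val_set, test_set, _ = random_split(dataset, [2501, 136, 657, rest])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
test_loader = DataLoader(test_set, batch_size=32)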

Hyperparameters:
Optimizer:
During the training process, we adjust the parameters (weights) of our model to minimize the loss function and make our predictions as accurate as possible. For this, we chose the Adam optimizer.

Loss Function:
The loss function is a method of evaluating how well your algorithm models your dataset. If your predictions are completely off, the loss function returns a high number; if they are quite good, it returns a low number. Here we used the CrossEntropyLoss function.
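Here is a minimal sketch of how these two choices fit together in a single PyTorch training step. The names model, images, and labels are assumed to be defined already; this is not our exact training loop:

import torch.nn as nn
import torch.optim as optim

# 'model', 'images' and 'labels' are assumed to exist already.
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0001)

# One training step: forward pass, loss, backpropagation, weight update.
optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()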

ResNet-50:

We trained ResNet-50 for 20 epochs with a batch size of 32 and a learning rate of 0.0001. We reached 98% accuracy on the training set, 88% on the validation set, and 91% on the final test set. Below you will find our loss graphs. You can also see how we modified the pretrained model below: we added an FC layer, dropout, ReLU, and softmax. According to these graphs, we can say that the model learned properly. If you run into overfitting problems, you can play with the dropout values.
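To give an idea of what this modification can look like, here is a hedged PyTorch sketch of replacing ResNet-50's head. The hidden size, dropout value, and num_classes are illustrative assumptions, not our exact numbers:

import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)

num_classes = 2  # assumption for illustration; set to your class count

# Swap the original 1000-class ImageNet head for our own:
# FC layer -> ReLU -> dropout -> FC layer.
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 512),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, num_classes),
)

# Note: CrossEntropyLoss applies log-softmax internally, so the softmax
# is only applied separately when we want probabilities at test time.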

[Loss graph: train vs. validation, ResNet-50]

AlexNet:

For AlexNet we again trained for 20 epochs, with a batch size of 32 and a learning rate of 0.0001. As a result, we had an accuracy of 96% on the training set, 83% on the validation set, and 88% on the test set. Below you will find our loss graphs.
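The same pretrained-model trick works for AlexNet, whose head lives in model.classifier rather than model.fc. A minimal sketch, with num_classes again being an assumption:

import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)

# AlexNet keeps its head in model.classifier; the last entry (index 6)
# is the 1000-class output layer, which we replace with our own.
num_classes = 2  # assumption for illustration
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)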

[Loss graph: train vs. validation, AlexNet]

VGG16:

For VGG16 training we used 20 epochs, a batch size of 32, and a learning rate of 0.0001. As a result, we reached 97% accuracy on the training set and 86% on the test set. You can also see how we modified the pretrained model below: here too we added an FC layer, dropout, ReLU, and softmax.
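VGG16's head is structured like AlexNet's, so the replacement looks similar. Below is a sketch of the head swap plus a small helper for computing the accuracy numbers we report; the hidden size, device, and num_classes are assumptions:

import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)

# Same treatment as ResNet-50: replace the 1000-class output layer
# with an FC -> ReLU -> dropout -> FC head of our own.
num_classes = 2  # assumption for illustration
model.classifier[6] = nn.Sequential(
    nn.Linear(model.classifier[6].in_features, 512),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, num_classes),
)

# Accuracy over a DataLoader, e.g. the test set we report above.
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total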

[Loss graph: train vs. validation, VGG16]

See you next week if we survive…
